
2009 | Book

Theory of Cryptography

6th Theory of Cryptography Conference, TCC 2009, San Francisco, CA, USA, March 15-17, 2009. Proceedings


About this book

TCC 2009, the 6th Theory of Cryptography Conference, was held in San Francisco, CA, USA, March 15–17, 2009. TCC 2009 was sponsored by the International Association for Cryptologic Research (IACR) and was organized in cooperation with the Applied Crypto Group at Stanford University. The General Chair of the conference was Dan Boneh.

The conference received 109 submissions, of which the Program Committee selected 33 for presentation at the conference. These proceedings consist of revised versions of those 33 papers. The revisions were not reviewed, and the authors bear full responsibility for the contents of their papers. The conference program also included two invited talks: “The Differential Privacy Frontier,” given by Cynthia Dwork, and “Some Recent Progress in Lattice-Based Cryptography,” given by Chris Peikert.

I thank the Steering Committee of TCC for entrusting me with the responsibility for the TCC 2009 program. I thank the authors of submitted papers for their contributions. The general impression of the Program Committee is that the submissions were of very high quality, and there were many more papers we wanted to accept than we could. The review process was therefore very rewarding but the selection was very delicate and challenging. I am grateful for the dedication, thoroughness, and expertise of the Program Committee. Observing the way the members of the committee operated makes me as confident as possible of the outcome of our selection process.

Table of contents

Frontmatter
An Optimally Fair Coin Toss

We address one of the foundational problems in cryptography: the bias of coin-flipping protocols. Coin-flipping protocols allow mutually distrustful parties to generate a common unbiased random bit, guaranteeing that even if one of the parties is malicious, it cannot significantly bias the output of the honest party. A classical result by Cleve [STOC ’86] showed that for any two-party $r$-round coin-flipping protocol there exists an efficient adversary that can bias the output of the honest party by $\Omega(1/r)$. However, the best previously known protocol only guarantees $O(1/\sqrt{r})$ bias, and the question of whether Cleve’s bound is tight has remained open for more than twenty years.

In this paper we establish the optimal trade-off between the round complexity and the bias of two-party coin-flipping protocols. Under standard assumptions (the existence of oblivious transfer), we show that Cleve’s lower bound is tight: we construct an $r$-round protocol with bias $O(1/r)$.
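For orientation, the classical baseline here is Blum’s commit-and-reveal coin flip, whose fairness gap is exactly what Cleve’s bound quantifies. Below is a minimal sketch of that baseline, not the paper’s protocol; the hash-based commitment and all names are illustrative stand-ins.

```python
# Minimal sketch of Blum's commit-and-reveal coin flip (the classical
# baseline, NOT the paper's r-round construction). The hash-based
# commitment is an illustrative stand-in.
import os
import hashlib

def commit(bit: int) -> tuple[bytes, bytes]:
    """Commit to a bit; returns (commitment, opening)."""
    opening = bytes([bit]) + os.urandom(32)
    return hashlib.sha256(opening).digest(), opening

def verify(commitment: bytes, opening: bytes) -> bool:
    return hashlib.sha256(opening).digest() == commitment

# A commits to a random bit, B replies with its own bit in the clear,
# then A opens; the output is the XOR of the two bits.
a_bit = os.urandom(1)[0] & 1
com, opening = commit(a_bit)        # A -> B: commitment
b_bit = os.urandom(1)[0] & 1        # B -> A: plaintext bit
assert verify(com, opening)         # A -> B: opening; B checks it
print("coin:", a_bit ^ b_bit)
# The fairness gap: A learns the outcome first and can refuse to open,
# biasing B's output -- the kind of attack Cleve's Omega(1/r) bound
# formalizes for r-round protocols.
```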

Tal Moran, Moni Naor, Gil Segev
Complete Fairness in Multi-party Computation without an Honest Majority

Gordon et al. recently showed that certain (non-trivial) functions can be computed with complete fairness in the two-party setting. Motivated by their results, we initiate a study of complete fairness in the multi-party case and demonstrate the first completely-fair protocols for non-trivial functions in this setting. We also provide evidence that achieving fairness is “harder” in the multi-party setting, at least with regard to round complexity.

S. Dov Gordon, Jonathan Katz
Fairness with an Honest Minority and a Rational Majority

We provide a simple protocol for secret reconstruction in any threshold secret sharing scheme, and prove that it is fair when executed with many rational parties together with a small minority of honest parties. That is, all parties will learn the secret with high probability when the honest parties follow the protocol and the rational parties act in their own self-interest (as captured by a set-Nash analogue of trembling hand perfect equilibrium). The protocol only requires a standard (synchronous) broadcast channel, tolerates both early stopping and incorrectly computed messages, and only requires 2 rounds of communication.

Previous protocols for this problem in the cryptographic or economic models have either required an honest majority, used strong communication channels that enable simultaneous exchange of information, or settled for approximate notions of security/equilibria. They all also required a nonconstant number of rounds of communication.

Shien Jin Ong, David C. Parkes, Alon Rosen, Salil Vadhan
Purely Rational Secret Sharing (Extended Abstract)

Rational secret sharing is a problem at the intersection of cryptography and game theory. In essence, a dealer wishes to engineer a communication game that, when rationally played, guarantees that each of the players learns the dealer’s secret. Yet, all solutions proposed so far did not rely solely on the players’ rationality, but also on their beliefs, and were also quite inefficient.

After providing a more complete definition of the problem, we exhibit a very efficient and purely rational solution to it with a verifiable trusted channel.

Silvio Micali, abhi shelat
Some Recent Progress in Lattice-Based Cryptography

The past decade in computer science has witnessed tremendous progress in the understanding of lattices, which are a rich source of seemingly hard computational problems. One of their most promising applications is to the design of cryptographic schemes that enjoy exceptionally strong security guarantees and other desirable properties.

Most notably, these schemes can be proved secure assuming only the worst-case hardness of well-studied lattice problems. Additionally, and in contrast with number-theoretic problems typically used in cryptography, the underlying problems have so far resisted attacks by subexponential-time and quantum algorithms. Yet even with these security advantages, lattice-based schemes also tend to be remarkably simple, asymptotically efficient, and embarrassingly parallelizable.

This tutorial will survey the foundational results of the area, as well as some more recent developments. Our particular focus will be on the core hard cryptographic (average-case) problems, some recurring techniques and abstractions, and a few notable applications.

Chris Peikert
Non-malleable Obfuscation

Existing definitions of program obfuscation do not rule out malleability attacks, where an adversary that sees an obfuscated program is able to generate another (potentially obfuscated) program that is related to the original one in some way.

We formulate two natural flavors of non-malleability requirements for program obfuscation, and show that they are incomparable in general. We also construct non-malleable obfuscators of both flavors for some program families of interest. Some of our constructions are in the Random Oracle model, whereas another one is in the common reference string model. We also define the notion of verifiable obfuscation which is of independent interest.

Ran Canetti, Mayank Varia
Simulation-Based Concurrent Non-malleable Commitments and Decommitments

In this paper we consider commitment schemes that are secure against concurrent man-in-the-middle (cMiM) attacks. Under such attacks, two possible notions of security for commitment schemes have been proposed in the literature: concurrent non-malleability with respect to commitment and concurrent non-malleability with respect to decommitment (i.e., opening).

Following the original notion of non-malleability introduced by Dolev, Dwork and Naor [STOC ’91], which is based on the independence of the committed messages, a new and stronger simulation-based notion of non-malleability has been proposed, with respect to openings or with respect to commitments [1,2,3,4], requiring that for any man-in-the-middle adversary there is a stand-alone adversary that succeeds with the same probability. When commitment schemes are used as sub-protocols (which is often the case), the simulation-based notion is much more powerful and simplifies the task of proving the security of the larger protocols.

The main result of this paper is a commitment scheme that is simulation-based concurrent non-malleable with respect to both commitment and decommitment. This property protects against cMiM attacks mounted during both commitments and decommitments, which is a crucial security requirement in several applications, such as digital auctions in which players have to perform both commitments and decommitments. Our scheme uses a constant number of rounds of interaction in the plain model and is the first scheme that enjoys all these properties under the simulation-based definitions.

Rafail Ostrovsky, Giuseppe Persiano, Ivan Visconti
Proofs of Retrievability via Hardness Amplification

Proofs of Retrievability (PoR), introduced by Juels and Kaliski [JK07], allow the client to store a file $F$ on an untrusted server, and later run an efficient audit protocol in which the server proves that it (still) possesses the client’s data. Constructions of PoR schemes attempt to minimize the client and server storage, the communication complexity of an audit, and even the number of file-blocks accessed by the server during the audit. In this work, we identify several different variants of the problem (such as bounded-use vs. unbounded-use, knowledge-soundness vs. information-soundness), and give nearly optimal PoR schemes for each of these variants. Our constructions either improve (and generalize) the prior PoR constructions, or give the first known PoR schemes with the required properties. In particular, we:

Formally prove the security of an (optimized) variant of the bounded-use scheme of Juels and Kaliski [JK07], without making any simplifying assumptions on the behavior of the adversary.

Build the first unbounded-use PoR scheme where the communication complexity is linear in the security parameter and which does not rely on Random Oracles, resolving an open question of Shacham and Waters [SW08].

Build the first bounded-use scheme with information-theoretic security.

The main insight of our work comes from a simple connection between PoR schemes and the notion of hardness amplification, extensively studied in complexity theory. In particular, our improvements come from first abstracting a purely information-theoretic notion of PoR codes, and then building nearly optimal PoR codes using state-of-the-art tools from coding and complexity theory.
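To make the audit model concrete, here is a toy spot-checking audit in the spirit of MAC-based PoR schemes. It sketches only the general shape of a PoR audit, not the paper’s constructions; all names and parameters are illustrative.

```python
# Toy spot-checking PoR audit: the client keeps only a MAC key, the
# server stores blocks plus tags, and an audit samples random indices.
# Illustrative sketch, not the paper's (coding-based) schemes.
import hmac, hashlib, os, random

KEY = os.urandom(32)                             # client's only local secret

def tag(index: int, block: bytes) -> bytes:
    return hmac.new(KEY, index.to_bytes(8, "big") + block,
                    hashlib.sha256).digest()

# Upload: server stores (block, tag) pairs.
blocks = [os.urandom(64) for _ in range(1000)]
server = [(b, tag(i, b)) for i, b in enumerate(blocks)]

def audit(num_challenges: int = 30) -> bool:
    """Verify randomly sampled blocks; catches large-scale loss w.h.p."""
    for i in random.sample(range(len(server)), num_challenges):
        block, t = server[i]                     # server's response
        if not hmac.compare_digest(t, tag(i, block)):
            return False
    return True

for i in range(0, 1000, 2):                      # adversary drops half the blocks
    server[i] = (b"\x00" * 64, server[i][1])
print("audit passed:", audit())                  # False except with prob ~2^-30
# Erasure-coding the file first, as in PoR constructions, turns
# "most blocks intact" into "entire file recoverable".
```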

Yevgeniy Dodis, Salil Vadhan, Daniel Wichs
Security Amplification for Interactive Cryptographic Primitives

Security amplification is an important problem in cryptography: starting with a “weakly secure” variant of some cryptographic primitive, the goal is to build a “strongly secure” variant of the same primitive. This question has been successfully studied for a variety of important cryptographic primitives, such as one-way functions, collision-resistant hash functions, encryption schemes and weakly verifiable puzzles. However, all these tasks were non-interactive. In this work we study security amplification of interactive cryptographic primitives, such as message authentication codes (MACs), digital signatures (SIGs) and pseudorandom functions (PRFs). In particular, we prove direct product theorems for MACs/SIGs and an XOR lemma for PRFs, therefore obtaining nearly optimal security amplification for these primitives.

Our main technical result is a new Chernoff-type theorem for what we call Dynamic Weakly Verifiable Puzzles, a generalization of ordinary Weakly Verifiable Puzzles which we introduce in this paper.
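The XOR lemma above concerns the standard XOR combiner for PRFs. A minimal sketch, assuming HMAC-SHA256 may stand in for the underlying (weak) PRF; all names are illustrative.

```python
# Standard XOR combiner: evaluate t independently keyed (weak) PRFs and
# XOR the outputs. HMAC-SHA256 is an illustrative stand-in PRF.
import hmac, hashlib, os
from functools import reduce

def f(key: bytes, x: bytes) -> bytes:
    """Stand-in for the underlying (weak) PRF."""
    return hmac.new(key, x, hashlib.sha256).digest()

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(u ^ v for u, v in zip(a, b))

def xor_prf(keys: list[bytes], x: bytes) -> bytes:
    """F_{k1..kt}(x) = f_{k1}(x) XOR ... XOR f_{kt}(x)."""
    return reduce(xor_bytes, (f(k, x) for k in keys))

keys = [os.urandom(32) for _ in range(4)]
print(xor_prf(keys, b"some input").hex())
# Intuition: if each instance is only weakly pseudorandom, XORing t
# independent instances drives the distinguishing advantage down
# roughly exponentially in t -- what the paper's XOR lemma makes precise.
```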

Yevgeniy Dodis, Russell Impagliazzo, Ragesh Jaiswal, Valentine Kabanets
Composability and On-Line Deniability of Authentication

Protocols for deniable authentication achieve seemingly paradoxical guarantees: upon completion of the protocol the receiver is convinced that the sender authenticated the message, but neither party can convince anyone else that the other party took part in the protocol. We introduce and study on-line deniability, where deniability should hold even when one of the parties colludes with a third party during execution of the protocol. This turns out to generalize several realistic scenarios that are outside the scope of previous models.

We show that a protocol achieves our definition of on-line deniability if and only if it realizes the message authentication functionality in the generalized universal composability framework; any protocol satisfying our definition thus automatically inherits strong composability guarantees. Unfortunately, we show that our definition is impossible to realize in the PKI model if adaptive corruptions are allowed (even if secure erasure is assumed). On the other hand, we show feasibility with respect to static corruptions (giving the first separation in terms of feasibility between the static and adaptive setting), and show how to realize a relaxation termed deniability with incriminating abort under adaptive corruptions.

Yevgeniy Dodis, Jonathan Katz, Adam Smith, Shabsi Walfish
Authenticated Adversarial Routing

The aim of this paper is to demonstrate the feasibility of authenticated throughput-efficient routing in an unreliable and dynamically changing synchronous network in which the majority of malicious insiders try to destroy and alter messages or disrupt communication in any way. More specifically, in this paper we seek to answer the following question: given a network in which the majority of nodes are controlled by a node-controlling adversary and whose topology is changing every round, is it possible to develop a protocol with polynomially-bounded memory per processor that guarantees throughput-efficient and correct end-to-end communication? We answer the question affirmatively for extremely general corruption patterns: we only request that the topology of the network and the corruption pattern of the adversary leave at least one path each round connecting the sender and receiver through honest nodes (though this path may change at every round). Our construction works in the public-key setting and enjoys bounded memory per processor (that is, polynomial in the network size and independent of the amount of traffic). Our protocol achieves optimal transfer rate with negligible decoding error. We stress that our protocol assumes no knowledge of which nodes are corrupted nor which path is reliable at any round, and is also fully distributed, with nodes making decisions locally, so that they need not know the topology of the network at any time.

The optimality that we prove for our protocol is very strong. Given any routing protocol, we evaluate its efficiency (rate of message delivery) in the “worst case,” that is, with respect to the worst possible graph and against the worst possible (polynomially bounded) adversarial strategy (subject to the above-mentioned connectivity constraints). Using this metric, we show that there does not exist any protocol that can be asymptotically superior (in terms of throughput) to ours in this setting.

We remark that the aim of our paper is to demonstrate via explicit example the feasibility of throughput-efficient authenticated adversarial routing. However, we stress that our protocol is not intended to provide a practical solution: due to its complexity, no attempt has thus far been made to reduce constants and memory requirements.

Our result is related to recent work of Barak, Goldberg and Xiao in 2008 [9], who studied fault localization in networks assuming a private-key trusted setup. Our work, in contrast, assumes a public-key PKI setup and aims not only at fault localization but also at transmission optimality. Among other things, our work answers one of the open questions posed in the Barak et al. paper regarding fault localization on multiple paths. The use of a public-key setting to achieve strong error-correction results in networks was inspired by the work of Micali, Peikert, Sudan and Wilson [14], who showed that classical error correction against a polynomially-bounded adversary can be achieved with surprisingly high precision. Our work is also related to an interactive coding theorem of Rajagopalan and Schulman [15], who showed that in noisy-edge static-topology networks a constant overhead in communication can be achieved (provided none of the processors are malicious), thus establishing an optimal-rate routing theorem for static-topology networks.

Finally, our work is closely related to, and builds upon, the problem of end-to-end communication in distributed networks, studied by Afek and Gafni [1], Awerbuch, Mansour, and Shavit [8], and Afek, Awerbuch, Gafni, Mansour, Rosen, and Shavit [2], though none of these papers consider or ensure correctness in the setting of a node-controlling adversary that may corrupt the majority of the network.

Yair Amir, Paul Bunn, Rafail Ostrovsky
Adaptive Zero-Knowledge Proofs and Adaptively Secure Oblivious Transfer

In the setting of secure computation, a set of parties wish to securely compute some function of their inputs, in the presence of an adversary. The adversary in question may be static (meaning that it controls a predetermined subset of the parties) or adaptive (meaning that it can choose to corrupt parties during the protocol execution and based on what it sees). In this paper, we study two fundamental questions relating to the basic zero-knowledge and oblivious transfer protocol problems:

Adaptive zero-knowledge proofs: We ask whether it is possible to construct adaptive zero-knowledge proofs (with unconditional soundness). Beaver (STOC 1996) showed that known zero-knowledge proofs are not adaptively secure, and in addition showed how to construct zero-knowledge arguments (with computational soundness).

Adaptively secure oblivious transfer: All known protocols for adaptively secure oblivious transfer rely on seemingly stronger hardness assumptions than for the case of static adversaries. We ask whether this is inherent, and in particular, whether it is possible to construct adaptively secure oblivious transfer from enhanced trapdoor permutations alone.

We provide surprising answers to the above questions, showing that achieving adaptive security is sometimes harder than achieving static security, and sometimes not. First, we show that assuming the existence of one-way functions only, there exist adaptive zero-knowledge proofs for all languages in $\mathcal{NP}$. In order to prove this, we overcome the problem that all adaptive zero-knowledge protocols known until now used equivocal commitments (which would enable an all-powerful prover to cheat). Second, we prove a black-box separation between adaptively secure oblivious transfer and enhanced trapdoor permutations. As a corollary, we derive a black-box separation between adaptively and statically secure oblivious transfer. This is the first black-box separation to relate to adaptive security, and thus the first evidence that it is indeed harder to achieve security in the presence of adaptive adversaries than in the presence of static adversaries.

Yehuda Lindell, Hila Zarosim
On the (Im)Possibility of Key Dependent Encryption

We study the possibility of constructing encryption schemes secure under messages that are chosen depending on the key $k$ of the encryption scheme itself. We give the following separation results that hold both in the private and in the public key settings:

Let $\mathcal{H}$ be the family of $\mathrm{poly}(n)$-wise independent hash-functions. There exists no fully-black-box reduction from an encryption scheme secure against key-dependent messages to one-way permutations (and also to families of trapdoor permutations) if the adversary can obtain encryptions of $h(k)$ for $h \in \mathcal{H}$.

There exists no reduction from an encryption scheme secure against key-dependent messages to, essentially, any cryptographic assumption, if the adversary can obtain an encryption of $g(k)$ for an arbitrary $g$, as long as the reduction’s proof of security treats both the adversary and the function $g$ as black boxes.
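A folklore counterexample shows why key-dependent messages fall outside standard security notions: any CPA-secure scheme can be tweaked to misbehave only on the plaintext $g(k) = k$, preserving ordinary security while being trivially KDM-insecure. A minimal sketch; the primitives are illustrative stand-ins.

```python
# Folklore KDM counterexample: a scheme that is CPA-secure (an
# adversary without k never triggers the bad branch) yet trivially
# insecure on the key-dependent message g(k) = k. Stand-in primitives.
import os, hmac, hashlib

def enc(k: bytes, m: bytes) -> bytes:
    """CPA-secure stand-in: one-block counter-mode-style encryption."""
    r = os.urandom(32)
    pad = hmac.new(k, r, hashlib.sha256).digest()[:len(m)]
    return r + bytes(a ^ b for a, b in zip(pad, m))

def enc_contrived(k: bytes, m: bytes) -> bytes:
    if m == k:                     # key-dependent plaintext: leak outright
        return b"LEAK" + k
    return b"OK__" + enc(k, m)

k = os.urandom(32)
print(enc_contrived(k, b"hello world")[:4])   # b'OK__' -- looks fine
print(enc_contrived(k, k)[:4])                # b'LEAK' -- KDM-insecure
```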

Iftach Haitner, Thomas Holenstein
On the (Im)Possibility of Arthur-Merlin Witness Hiding Protocols

The concept of witness-hiding suggested by Feige and Shamir is a natural relaxation of zero-knowledge. In this paper we identify languages and distributions for which many known constant-round public-coin protocols with negligible soundness cannot be shown to be witness-hiding using black-box techniques. One particular consequence of our results is that parallel repetition of either 3-Colorability or Hamiltonicity cannot be shown to be witness-hiding with respect to some probability distribution over the inputs assuming that:

The distribution assigns positive probability only to instances with exactly one witness.

Polynomial-size circuits cannot find a witness with noticeable probability on a random input chosen according to the distribution.

The proof of security relies on a black-box reduction that is independent of the choice of the commitment scheme used in the protocol.

These impossibility results conceptually match results of Feige and Shamir that use such black-box reductions to show that parallel repetition of 3-Colorability or Hamiltonicity is witness-hiding for distributions with “two independent witnesses”.

We also consider black-box reductions for parallel repetition of 3-Colorability or Hamiltonicity that depend on a specific implementation of the commitment scheme. While we cannot rule out such reductions completely, we show that “natural reductions” cannot bypass the limitations above.

Our proofs use techniques developed by Goldreich and Krawczyk for the case of zero knowledge. The setting of witness-hiding, however, presents new technical and conceptual difficulties that do not arise in the zero-knowledge setting. The high-level idea is that if a black-box reduction establishes the witness-hiding property for a protocol, and the protocol also happens to be a proof of knowledge, then this latter property can actually be used “against the reduction” to find witnesses unconditionally.

Iftach Haitner, Alon Rosen, Ronen Shaltiel
Secure Computability of Functions in the IT Setting with Dishonest Majority and Applications to Long-Term Security

While general secure function evaluation (SFE) with information-theoretic (IT) security is infeasible in the presence of a corrupted majority in the standard model, there are SFE protocols (Goldreich et al. [STOC ’87]) that are computationally secure (without fairness) in the presence of an actively corrupted majority of the participants. Now, computational assumptions can usually be well justified at the time of protocol execution. The concern is rather a potential violation of the privacy of sensitive data by an attacker whose power increases over time. Therefore, we ask which functions can be computed with long-term security, where we admit computational assumptions for the duration of a computation, but require IT security (privacy) once the computation is concluded.

Towards a combinatorial characterization of this class of functions, we also characterize the classes of functions that can be computed IT securely in the authenticated channels model in the presence of passive, semi-honest, active, and quantum adversaries.

Robin Künzler, Jörn Müller-Quade, Dominik Raub
Complexity of Multi-party Computation Problems: The Case of 2-Party Symmetric Secure Function Evaluation

In symmetric secure function evaluation (SSFE), Alice has an input $x$, Bob has an input $y$, and both parties wish to securely compute $f(x, y)$. We show several new results classifying the feasibility of securely implementing these functions in several security settings. Namely, we give new alternate characterizations of the functions that have (statistically) secure protocols against passive and active (standalone), computationally unbounded adversaries. We also show a strict, infinite hierarchy of complexity for SSFE functions with respect to universally composable security against unbounded adversaries. That is, there exists a sequence of functions $f_1, f_2, \ldots$ such that there exists a UC-secure protocol for $f_i$ in the $f_j$-hybrid world if and only if $i \le j$.

The main new technical tool that unifies our unrealizability results is a powerful protocol simulation theorem, which may be of independent interest. Essentially, in any adversarial setting (UC, standalone, or passive), $f$ is securely realizable if and only if a very simple (deterministic) “canonical” protocol for $f$ achieves the desired security. Thus, to show that $f$ is unrealizable, one need simply demonstrate a single attack on a single simple protocol.

Hemanta K. Maji, Manoj Prabhakaran, Mike Rosulek
Realistic Failures in Secure Multi-party Computation

In secure multi-party computation, the different ways in which the adversary can control the corrupted players are described by different corruption types. The three most common corruption types are active corruption (the adversary has full control over the corrupted player), passive corruption (the adversary sees what the corrupted player sees) and fail-corruption (the adversary can force the corrupted player to crash irrevocably). Because fail-corruption is inadequate for modeling recoverable failures, the so-called omission corruption was proposed and studied mainly in the context of Byzantine Agreement (BA). It allows the adversary to selectively block messages sent from and to the corrupted player, but without actually seeing the message.

In this paper we propose a modular study of omission failures in MPC, by introducing the notions of send-omission (the adversary can selectively block outgoing messages) and receive-omission (the adversary can selectively block incoming messages) corruption. We provide security definitions for protocols tolerating a threshold adversary who can actively, receive-omission, and send-omission corrupt up to $t_a$, $t_\rho$, and $t_\sigma$ players, respectively. We show that the condition $3t_a + t_\rho + t_\sigma < n$ is necessary and sufficient for perfectly secure MPC tolerating such an adversary. Along the way we provide perfectly secure protocols for BA under the same bound. As an implication of our results, we show that an adversary who actively corrupts up to $t_a$ players and omission corrupts (according to the already existing notion) up to $t_\omega$ players can be tolerated for perfectly secure MPC if $3t_a + 2t_\omega < n$. This significantly improves a result by Koo in TCC 2006.

Vassilis Zikas, Sarah Hauser, Ueli Maurer
Secure Arithmetic Computation with No Honest Majority

We study the complexity of securely evaluating arithmetic circuits over finite rings. This question is motivated by natural secure computation tasks. Focusing mainly on the case of two-party protocols with security against malicious parties, our main goals are to: (1) only make black-box calls to the ring operations and standard cryptographic primitives, and (2) minimize the number of such black-box calls as well as the communication overhead.

We present several solutions which differ in their efficiency, generality, and underlying intractability assumptions. These include:

An unconditionally secure protocol in the OT-hybrid model which makes a black-box use of an arbitrary ring $R$, but where the number of ring operations grows linearly with (an upper bound on) $\log |R|$.

Computationally secure protocols in the OT-hybrid model which make a black-box use of an underlying ring, and in which the number of ring operations does not grow with the ring size. The protocols rely on variants of previous intractability assumptions related to linear codes. In the most efficient instance of these protocols, applied to a suitable class of fields, the (amortized) communication cost is a constant number of field elements per multiplication gate and the computational cost is dominated by $O(\log k)$ field operations per gate, where $k$ is a security parameter. These results extend a previous approach of Naor and Pinkas for secure polynomial evaluation (SIAM J. Comput., 2006).

A protocol for the rings $\mathbb{Z}_m = \mathbb{Z}/m\mathbb{Z}$ which only makes a black-box use of a homomorphic encryption scheme. When $m$ is prime, the (amortized) number of calls to the encryption scheme for each gate of the circuit is constant.

All of our protocols are in fact UC-secure in the OT-hybrid model and can be generalized to multiparty computation with an arbitrary number of malicious parties.
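The phrase “black-box calls to the ring operations” can be made concrete as an abstract interface: the protocol may add, negate, multiply, and sample ring elements, but never inspects their representation. A minimal sketch with illustrative names, not the paper’s formalism.

```python
# Black-box ring abstraction: the protocol only sees these operations.
from abc import ABC, abstractmethod
import random

class Ring(ABC):
    """The only operations a black-box protocol may perform on elements."""
    @abstractmethod
    def add(self, a, b): ...
    @abstractmethod
    def neg(self, a): ...
    @abstractmethod
    def mul(self, a, b): ...
    @abstractmethod
    def random_element(self): ...

class Zm(Ring):
    """The ring Z_m = Z/mZ from the third construction above."""
    def __init__(self, m: int):
        self.m = m
    def add(self, a, b): return (a + b) % self.m
    def neg(self, a): return (-a) % self.m
    def mul(self, a, b): return (a * b) % self.m
    def random_element(self): return random.randrange(self.m)

def additive_share(ring: Ring, x):
    """Split x into two additive shares using only black-box calls --
    the basic step underlying arithmetic MPC protocols."""
    r = ring.random_element()
    return r, ring.add(x, ring.neg(r))

ring = Zm(2**61 - 1)
s1, s2 = additive_share(ring, 42)
assert ring.add(s1, s2) == 42
```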

Yuval Ishai, Manoj Prabhakaran, Amit Sahai
Universally Composable Multiparty Computation with Partially Isolated Parties

It is well known that universally composable multiparty computation cannot, in general, be achieved in the standard model without setup assumptions when the adversary can corrupt an arbitrary number of players. One way to get around this problem is by having a trusted third party generate some global setup such as a common reference string (CRS) or a public key infrastructure (PKI). The recent work of Katz shows that we may instead rely on physical assumptions, and in particular tamper-proof hardware tokens. In this paper, we consider a similar but strictly weaker physical assumption. We assume that a player (Alice) can partially isolate another player (Bob) for a brief portion of the computation and prevent Bob from communicating more than some limited number of bits with the environment. For example, isolation might be achieved by asking Bob to put his functionality on a tamper-proof hardware token and assuming that Alice can prevent this token from communicating to the outside world. Alternatively, Alice may interact with Bob directly but in a special office which she administers and where there are no high-bandwidth communication channels to the outside world. We show that, under standard cryptographic assumptions, such a physical setup can be used to UC-realize any two-party and multiparty computation in the presence of an active and adaptive adversary corrupting any number of players. We also consider an alternative scenario, in which there are some trusted third parties but no single such party is trusted by all of the players. This compromise allows us to significantly limit the use of the physical set-up and hence might be preferred in practice.

Ivan Damgård, Jesper Buus Nielsen, Daniel Wichs
Oblivious Transfer from Weak Noisy Channels

Various results show that oblivious transfer can be implemented using the assumption of noisy channels. Unfortunately, this assumption is not as weak as one might think, because in a cryptographic setting, these noisy channels must satisfy very strong security requirements. Unfair noisy channels, introduced by Damgård, Kilian and Salvail [Eurocrypt ’99], reduce these limitations: they give the adversary an unfair advantage over the honest player, and therefore weaken the security requirements on the noisy channel. However, this model still has many shortcomings: for example, the adversary’s advantage is only allowed to have a very special form, and no error is allowed in the implementation.

In this paper we generalize the idea of unfair noisy channels. We introduce two new models of cryptographic noisy channels that we call the weak erasure channel and the weak binary symmetric channel, and show how they can be used to implement oblivious transfer. Our models are more general and use much weaker assumptions than unfair noisy channels, which makes implementation a more realistic prospect. For example, these are the first models that allow the parameters to come from experimental evidence.
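For background, the classical route from a well-behaved erasure channel to 1-out-of-2 OT, whose strong channel assumptions this paper relaxes, can be simulated in a few lines. An honest-but-curious sketch with illustrative parameters.

```python
# Honest-but-curious simulation of the classical erasure-channel-to-OT
# construction (the idealized baseline, not this paper's weak channels).
import random

def erasure_channel(bits, p=0.5):
    """Each transmitted bit is independently erased with probability p."""
    return [b if random.random() > p else None for b in bits]

n = 64
sender_bits = [random.randrange(2) for _ in range(4 * n)]
received = erasure_channel(sender_bits)          # receiver's view

got = [i for i, b in enumerate(received) if b is not None]
lost = [i for i, b in enumerate(received) if b is None]
assert len(got) >= n and len(lost) >= n          # holds w.h.p. at these sizes

choice = 1                                       # receiver's choice bit
sets = [None, None]
sets[choice] = random.sample(got, n)             # mask receiver CAN compute
sets[1 - choice] = random.sample(lost, n)        # mask receiver CANNOT
# Receiver -> sender: (sets[0], sets[1]). The sender cannot tell which
# positions were erased, so the index sets hide `choice`.

def sender_mask(indices):
    acc = 0
    for i in indices:
        acc ^= sender_bits[i]
    return acc

m0, m1 = 0, 1                                    # sender's one-bit messages
c0 = m0 ^ sender_mask(sets[0])                   # sender -> receiver
c1 = m1 ^ sender_mask(sets[1])

rec_mask = 0
for i in sets[choice]:
    rec_mask ^= received[i]                      # known positions only
print("receiver recovers:", (c0, c1)[choice] ^ rec_mask)  # == m_choice
```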

Jürg Wullschleger
Composing Quantum Protocols in a Classical Environment

We propose a general security definition for cryptographic quantum protocols that implement classical non-reactive two-party tasks. The definition is expressed in terms of simple quantum-information-theoretic conditions which must be satisfied by the protocol to be secure. The conditions are uniquely determined by the ideal functionality $\mathcal{F}$ defining the cryptographic task to be implemented. We then show the following composition result. If quantum protocols $\pi_1, \ldots, \pi_\ell$ securely implement ideal functionalities $\mathcal{F}_1, \ldots, \mathcal{F}_\ell$ according to our security definition, then any purely classical two-party protocol, which makes sequential calls to $\mathcal{F}_1, \ldots, \mathcal{F}_\ell$, is equally secure as the protocol obtained by replacing the calls to $\mathcal{F}_1, \ldots, \mathcal{F}_\ell$ with the respective quantum protocols $\pi_1, \ldots, \pi_\ell$. Hence, our approach yields the minimal security requirements which are strong enough for the typical use of quantum protocols as subroutines within larger classical schemes. Finally, we show that recently proposed quantum protocols for secure identification and oblivious transfer in the bounded-quantum-storage model satisfy our security definition, and thus compose in the above sense.

Serge Fehr, Christian Schaffner
LEGO for Two-Party Secure Computation

This paper continues the recent line of work on making Yao’s garbled circuit approach to two-party computation secure against an active adversary. We propose a new cut-and-choose based approach called LEGO (Large Efficient Garbled-circuit Optimization); it is specifically aimed at large circuits. Asymptotically it obtains a factor $\log|\mathcal{C}|$ improvement in computation and communication over previous cut-and-choose based solutions, where $|\mathcal{C}|$ is the size of the circuit being computed. The protocol is universally composable (UC) in the OT-hybrid model against a static, active adversary.

Jesper Buus Nielsen, Claudio Orlandi
Simple, Black-Box Constructions of Adaptively Secure Protocols

We present a compiler for transforming an oblivious transfer (OT) protocol secure against an adaptive semi-honest adversary into one that is secure against an adaptive malicious adversary. Our compiler achieves security in the universal composability framework, assuming access to an ideal commitment functionality, and improves over previous work achieving the same security guarantee in two ways: it uses black-box access to the underlying protocol and achieves a constant multiplicative overhead in the round complexity. As a corollary, we obtain the first constructions of adaptively secure protocols in the stand-alone model using black-box access to a low-level primitive.

Seung Geol Choi, Dana Dachman-Soled, Tal Malkin, Hoeteck Wee
Black-Box Constructions of Two-Party Protocols from One-Way Functions

We exhibit constructions of the following two-party cryptographic protocols given only black-box access to a one-way function:

constant-round zero-knowledge arguments (of knowledge) for any language in NP;

constant-round trapdoor commitment schemes;

constant-round parallel coin-tossing.

Previous constructions either require stronger computational assumptions (e.g. collision-resistant hash functions), non-black-box access to a one-way function, or a super-constant number of rounds. As an immediate corollary, we obtain a constant-round black-box construction of secure two-party computation protocols starting from only semi-honest oblivious transfer. In addition, by combining our techniques with recent constructions of concurrent zero-knowledge and non-malleable primitives, we obtain black-box constructions of concurrent zero-knowledge arguments for NP and non-malleable commitments starting from only one-way functions.

Rafael Pass, Hoeteck Wee
Chosen-Ciphertext Security via Correlated Products

We initiate the study of one-wayness under correlated products. We are interested in identifying necessary and sufficient conditions for a function $f$ and a distribution on inputs $(x_1, \ldots, x_k)$, so that the function $(f(x_1), \ldots, f(x_k))$ is one-way. The main motivation of this study is the construction of public-key encryption schemes that are secure against chosen-ciphertext attacks (CCA). We show that any collection of injective trapdoor functions that is secure under a very natural correlated product can be used to construct a CCA-secure encryption scheme. The construction is simple, black-box, and admits a direct proof of security.
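The object of study can be pinned down in a few lines: given a one-way $f$, is the $k$-wise map $(x_1, \ldots, x_k) \mapsto (f(x_1), \ldots, f(x_k))$ still one-way when the $x_i$ are correlated rather than independent? A sketch of just this syntax, with truncated SHA-256 as a stand-in function and a simple seed-derived correlation; both are illustrative, not the paper’s distribution.

```python
# Syntax of a correlated product: evaluate f on k correlated inputs.
# Stand-in function and correlation; the paper studies the specific
# correlated distribution arising in its CCA construction.
import hashlib, os

def f(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()[:16]

def correlated_inputs(seed: bytes, k: int) -> list[bytes]:
    """Example correlation: each x_i is the seed plus a distinct suffix,
    so any single x_i determines all the others."""
    return [seed + bytes([i]) for i in range(k)]

def correlated_product(seed: bytes, k: int) -> list[bytes]:
    return [f(x) for x in correlated_inputs(seed, k)]

print([y.hex() for y in correlated_product(os.urandom(16), 3)])
```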

We provide evidence that security under correlated products is achievable by demonstrating that lossy trapdoor functions (Peikert and Waters, STOC ’08) yield injective trapdoor functions that are secure under the above mentioned correlated product. Although we currently base security under correlated products on existing constructions of lossy trapdoor functions, we argue that the former notion is potentially weaker as a general assumption. Specifically, there is no fully-black-box construction of lossy trapdoor functions from trapdoor functions that are secure under correlated products.

Alon Rosen, Gil Segev
Hierarchical Identity Based Encryption with Polynomially Many Levels

We present the first hierarchical identity based encryption (HIBE) system that has full security for more than a constant number of levels. In all prior HIBE systems in the literature, the security reductions suffered from exponential degradation in the depth of the hierarchy, so these systems were only proven fully secure for identity hierarchies of constant depth. (For deep hierarchies, previous work could only prove the weaker notion of selective-ID security.) In contrast, we offer a tight proof of security, regardless of the number of levels; hence our system is secure for polynomially many levels.

Our result can very roughly be viewed as an application of Boyen’s framework for constructing HIBE systems from exponent-inversion IBE systems to a (dramatically souped-up) version of Gentry’s IBE system, which has a tight reduction. In more detail, we first describe a generic transformation from “identity based broadcast encryption with key randomization” (KR-IBBE) to a HIBE, and then construct KR-IBBE by modifying a recent construction of IBBE of Gentry and Waters, which is itself an extension of Gentry’s IBE system. Our hardness assumption is similar to that underlying Gentry’s IBE system.

Craig Gentry, Shai Halevi
Predicate Privacy in Encryption Systems

Predicate encryption is a new encryption paradigm which gives a master secret key owner fine-grained control over access to encrypted data. The master secret key owner can generate secret key tokens corresponding to predicates. An encryption of data $x$ can be evaluated using a secret token corresponding to a predicate $f$; the user learns whether the data satisfies the predicate, i.e., whether $f(x) = 1$.

Prior work on public-key predicate encryption has focused on the notion of data or plaintext privacy, the property that ciphertexts reveal no information about the encrypted data to an attacker other than what is inherently revealed by the tokens the attacker possesses. In this paper, we consider a new notion called predicate privacy, the property that tokens reveal no information about the encoded query predicate. Predicate privacy is inherently impossible to achieve in the public-key setting and has therefore received little attention in prior work. In this work, we consider predicate encryption in the symmetric-key setting and present a symmetric-key predicate encryption scheme which supports inner product queries. We prove that our scheme achieves both plaintext privacy and predicate privacy.
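The inner-product predicate class has a simple semantics: a token for a vector $v$ matches an attribute vector $x$ iff $\langle v, x \rangle = 0$. A sketch of just this semantics (not the encryption); the modulus and the standard equality-test encoding are illustrative.

```python
# Semantics of inner-product predicates: f_v(x) = 1 iff <v, x> = 0 (mod N).
# This shows only the predicate, not the predicate-encryption scheme.
def inner_product_predicate(v: list[int], x: list[int], modulus: int) -> bool:
    assert len(v) == len(x)
    return sum(a * b for a, b in zip(v, x)) % modulus == 0

# Inner products subsume useful queries; e.g., the equality test
# "x == c" becomes token vector (c, -1) against attribute vector (1, x):
c, x = 7, 7
print(inner_product_predicate([c, -1], [1, x], modulus=101))  # True iff x == c
```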

Emily Shen, Elaine Shi, Brent Waters
Simultaneous Hardcore Bits and Cryptography against Memory Attacks

This paper considers two questions in cryptography.

Cryptography Secure Against Memory Attacks. A particularly devastating side-channel attack against cryptosystems, termed the “memory attack”, was proposed recently. In this attack, a significant fraction of the bits of a secret key of a cryptographic algorithm can be measured by an adversary if the secret key is ever stored in a part of memory which can be accessed even after power has been turned off for a short amount of time. Such an attack has been shown to completely compromise the security of various cryptosystems in use, including the RSA cryptosystem and AES.

We show that the public-key encryption scheme of Regev (STOC 2005), and the identity-based encryption scheme of Gentry, Peikert and Vaikuntanathan (STOC 2008) are remarkably robust against memory attacks where the adversary can measure a large fraction of the bits of the secret-key, or more generally, can compute an arbitrary function of the secret-key of bounded output length. This is done without increasing the size of the secret-key, and without introducing any complication of the natural encryption and decryption routines.

Simultaneous Hardcore Bits. We say that a block of bits of $x$ is simultaneously hard-core for a one-way function $f(x)$ if, given $f(x)$, the bits cannot be distinguished from a random string of the same length. Although any candidate one-way function can be shown to hide one hardcore bit and even a logarithmic number of simultaneously hardcore bits, there are few examples of one-way or trapdoor functions for which a linear number of the input bits have been proved simultaneously hardcore; the ones that are known relate the simultaneous security to the difficulty of factoring integers.

We show that for a lattice-based (injective) trapdoor function, which is a variant of the function proposed earlier by Gentry, Peikert and Vaikuntanathan, $N - o(N)$ of the input bits are simultaneously hardcore, where $N$ is the total length of the input.

These two results rely on similar proof techniques.

Adi Akavia, Shafi Goldwasser, Vinod Vaikuntanathan
The Differential Privacy Frontier (Extended Abstract)

We review the definition of differential privacy and briefly survey a handful of very recent contributions to the differential privacy frontier.
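As a concrete touchstone, the canonical mechanism satisfying differential privacy is the Laplace mechanism: answer a numeric query after adding noise scaled to the query’s sensitivity. A minimal sketch for a counting query (sensitivity 1); epsilon is the privacy parameter and the data are synthetic.

```python
# Laplace mechanism: release a sensitivity-1 count with epsilon-DP noise.
import math, random

def laplace(scale: float) -> float:
    """Sample Laplace(0, scale) via the inverse CDF."""
    u = random.uniform(-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon: float) -> float:
    """A counting query changes by at most 1 when one record changes,
    so Laplace(1/epsilon) noise gives epsilon-differential privacy."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace(1.0 / epsilon)

data = [random.randrange(100) for _ in range(1000)]
print(private_count(data, lambda r: r < 50, epsilon=0.5))
```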

Cynthia Dwork
How Efficient Can Memory Checking Be?

We consider the problem of memory checking, where a user wants to maintain a large database on a remote server but has only limited local storage. The user wants to use the small (but trusted and secret) local storage to detect faults in the large (but public and untrusted) remote storage. A memory checker receives from the user store and retrieve operations to the large database. The checker makes its own requests to the (untrusted) remote storage and receives answers to these requests. It then uses these responses, together with its small private and reliable local memory, to ascertain that all requests were answered correctly, or to report faults in the remote storage (the public memory).

A fruitful line of research investigates the complexity of memory checking in terms of the number of queries the checker issues per user request (query complexity) and the size of the reliable local memory (space complexity). Blum et al., who first formalized the question, distinguished between online checkers (that report faults as soon as they occur) and offline checkers (that report faults only at the end of a long sequence of operations). In this work we revisit the question of memory checking, asking: how efficient can memory checking be?

For online checkers, Blum et al. provided a checker with logarithmic query complexity in $n$, the database size. Our main result is a lower bound: we show that for checkers that access the remote storage in a deterministic and non-adaptive manner (as do all known memory checkers), their query complexity must be at least $\Omega(\log n / \log\log n)$. To cope with this negative result, we show how to trade off the read and write complexity of online memory checkers: for any desired logarithm base $d$, we construct an online checker where either reading or writing is inexpensive and has query complexity $O(\log_d n)$. The price for this is that the other operation (write or read, respectively) has query complexity $O(d \cdot \log_d n)$. Finally, if even this performance is unacceptable, offline memory checking may be an inexpensive alternative. We provide a scheme with $O(1)$ amortized query complexity, improving Blum et al.’s construction, which only had such performance for long sequences of at least $n$ operations.
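A naive baseline makes the cost model tangible: a degree-$d$ hash tree whose root is the only trusted local state gives a checker where both reads and writes touch $O(d \cdot \log_d n)$ remote cells, whereas the trade-off above makes one of the two operations cost only $O(\log_d n)$. A toy sketch of that baseline, not the paper’s construction; all names are illustrative.

```python
# Toy online memory checker from a degree-d hash tree. Untrusted
# "remote" storage: self.value (the database) and self.levels (hashes).
# Trusted local storage: self.root only. n must be a power of d.
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

class HashTreeChecker:
    def __init__(self, n: int, d: int):
        self.n, self.d = n, d
        self.value = [b"\x00"] * n
        self.levels, cur = [], [h(v) for v in self.value]
        while len(cur) > 1:
            self.levels.append(cur)
            cur = [h(*cur[i:i + d]) for i in range(0, len(cur), d)]
        self.levels.append(cur)
        self.root = cur[0]                 # the only trusted state

    def _path_hash(self, i: int) -> bytes:
        """Recompute the root from leaf i, reading d cells per level:
        O(d * log_d n) remote queries."""
        node, idx = h(self.value[i]), i
        for lvl in range(len(self.levels) - 1):
            if self.levels[lvl][idx] != node:
                raise RuntimeError("fault detected")
            base = (idx // self.d) * self.d
            node = h(*self.levels[lvl][base:base + self.d])
            idx //= self.d
        return node

    def read(self, i: int) -> bytes:
        if self._path_hash(i) != self.root:
            raise RuntimeError("fault detected")
        return self.value[i]

    def write(self, i: int, v: bytes) -> None:
        self.read(i)                       # authenticate the old path
        self.value[i] = v
        node, idx = h(v), i
        for lvl in range(len(self.levels) - 1):
            self.levels[lvl][idx] = node
            base = (idx // self.d) * self.d
            node = h(*self.levels[lvl][base:base + self.d])
            idx //= self.d
        self.levels[-1][0] = node
        self.root = node                   # update the trusted state

mc = HashTreeChecker(n=81, d=3)
mc.write(5, b"hello")
assert mc.read(5) == b"hello"
mc.value[5] = b"tampered"                  # adversarial change to remote data
try:
    mc.read(5)
except RuntimeError as e:
    print(e)                               # "fault detected"
```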

Cynthia Dwork, Moni Naor, Guy N. Rothblum, Vinod Vaikuntanathan
Goldreich’s One-Way Function Candidate and Myopic Backtracking Algorithms

Goldreich (ECCC 2000) proposed a candidate one-way function construction which is parameterized by the choice of a small predicate (over $d = O(1)$ variables) and of a bipartite expanding graph of right-degree $d$. The function is computed by labeling the $n$ vertices on the left with the bits of the input, labeling each of the $n$ vertices on the right with the value of the predicate applied to the neighbors, and outputting the $n$-bit string of labels of the vertices on the right.
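The construction just described is short enough to state in code. A sketch using the predicate $P_d$ from the end of this abstract and a uniformly random right-degree-$d$ graph as a stand-in for an explicit expander (random graphs are expanding with high probability).

```python
# Goldreich's candidate one-way function, as described above.
import random

def P_d(bits):
    """P_d(x_1..x_d) = x_1 XOR ... XOR x_{d-2} XOR (x_{d-1} AND x_d),
    the non-linear predicate from the lower bound in this abstract."""
    out = 0
    for b in bits[:-2]:
        out ^= b
    return out ^ (bits[-2] & bits[-1])

def goldreich(x, graph):
    """graph[j] lists the d left-neighbors of right-vertex j; output
    bit j is the predicate applied to those input bits."""
    return [P_d([x[i] for i in nbrs]) for nbrs in graph]

n, d = 128, 5
graph = [random.sample(range(n), d) for _ in range(n)]  # random right-degree-d
x = [random.randrange(2) for _ in range(n)]             # the n-bit input
y = goldreich(x, graph)                                 # the n-bit output
print("".join(map(str, y[:32])), "...")
# Inverting this map, given (graph, y), is the planted CSP that the
# SAT-solver experiments in the abstract attack.
```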

Inverting Goldreich’s one-way function is equivalent to finding solutions to a certain constraint satisfaction problem (which easily reduces to SAT) having a “planted solution,” and so the use of SAT solvers constitutes a natural class of attacks.

We perform an experimental analysis using MiniSat, which is one of the best publicly available algorithms for SAT. Our experiment shows that the running time required to invert the function grows exponentially with the length of the input, and that such an attack becomes infeasible already with small input length (a few hundred bits).

Motivated by these encouraging experiments, we initiate a rigorous study of the limitations of backtracking-based SAT solvers as attacks against Goldreich’s function. Results by Alekhnovich, Hirsch and Itsykson imply that Goldreich’s function is secure against “myopic” backtracking algorithms (an interesting subclass) if the 3-ary parity predicate $P(x_1, x_2, x_3) = x_1 \oplus x_2 \oplus x_3$ is used. One must, however, use non-linear predicates in the construction, which otherwise succumbs to a trivial attack via Gaussian elimination.

We generalize the work of Alekhnovich et al. to handle a more general class of predicates, and we present a lower bound for the construction that uses the predicate $P_d(x_1, \ldots, x_d) := x_1 \oplus x_2 \oplus \cdots \oplus x_{d-2} \oplus (x_{d-1} \wedge x_d)$ and a random graph.

James Cook, Omid Etesami, Rachel Miller, Luca Trevisan
Secret Sharing and Non-Shannon Information Inequalities

The known secret-sharing schemes for most access structures are not efficient; even for a one-bit secret the length of the shares in the schemes is $2^{O(n)}$, where $n$ is the number of participants in the access structure. It is a long-standing open problem to improve these schemes or prove that they cannot be improved. The best known lower bound is by Csirmaz (J. Cryptology ’97), who proved that there exist access structures with $n$ participants such that the size of the share of at least one party is $n/\log n$ times the secret size. Csirmaz’s proof uses Shannon information inequalities, which were the only information inequalities known when Csirmaz published his result. On the negative side, Csirmaz proved that by only using Shannon information inequalities one cannot prove a lower bound of $\omega(n)$ on the share size. In the last decade, a sequence of non-Shannon information inequalities were discovered. This raises the hope that these inequalities can help in improving the lower bounds beyond $n$. However, in this paper we show that all the inequalities known to date cannot prove a lower bound of $\omega(n)$ on the share size.
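For contrast with these general-access-structure bounds: for threshold access structures, Shamir’s scheme already achieves shares no longer than the secret. A minimal sketch over a prime field; the field and parameters are illustrative.

```python
# Shamir t-out-of-n secret sharing: share size equals secret size,
# the benchmark against which the lower bounds above are measured.
import random

P = 2**127 - 1                                   # a Mersenne prime field

def share(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Shares are points on a random degree-(t-1) polynomial with
    constant term equal to the secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(i, poly(i)) for i in range(1, n + 1)]

def reconstruct(points: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 from any t points."""
    secret = 0
    for j, (xj, yj) in enumerate(points):
        num, den = 1, 1
        for m, (xm, _) in enumerate(points):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = share(secret=123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789      # any 3 shares suffice
assert reconstruct(shares[1:4]) == 123456789
```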

Amos Beimel, Ilan Orlov
Weak Verifiable Random Functions

Verifiable random functions (VRFs), introduced by Micali, Rabin and Vadhan, are pseudorandom functions in which the owner of the seed produces a public key that constitutes a commitment to all values of the function, and can then produce, for any input $x$, a proof that the function has been evaluated correctly on $x$, preserving pseudorandomness for all other inputs. No public key (even a falsely generated one) should allow for proving more than one value per input.

VRFs are both a natural and a useful primitive, and previous works have suggested a variety of constructions and applications. Still, there are many open questions in the study of VRFs, especially their relation to more widely studied cryptographic primitives and constructing them from a wide variety of cryptographic assumptions.

In this work we define a natural relaxation of VRFs that we call weak verifiable random functions, where pseudorandomness is required to hold only for randomly selected inputs. We conduct a study of weak VRFs, focusing on applications, constructions, and their relationship to other cryptographic primitives. We show:

Constructions.

We present constructions of weak VRFs based on a variety of assumptions, including general assumptions such as (enhanced) trapdoor permutations, as well as constructions based on specific number-theoretic assumptions such as the Diffie-Hellman assumption in bilinear groups.

Separations.

Verifiable random functions (both weak and standard) cannot be constructed from one-way permutations in a black-box manner. This constitutes the first result separating (standard) VRFs from any cryptographic primitive.

Applications. Weak VRFs capture the essence of constructing non-interactive zero-knowledge proofs for all NP languages.

Zvika Brakerski, Shafi Goldwasser, Guy N. Rothblum, Vinod Vaikuntanathan
Efficient Oblivious Pseudorandom Function with Applications to Adaptive OT and Secure Computation of Set Intersection

An Oblivious Pseudorandom Function (OPRF) [15] is a two-party protocol between sender $S$ and receiver $R$ for securely computing a pseudorandom function $f_k(\cdot)$ on key $k$ contributed by $S$ and input $x$ contributed by $R$, in such a way that receiver $R$ learns only the value $f_k(x)$ while sender $S$ learns nothing from the interaction. In other words, an OPRF protocol for PRF $f_k(\cdot)$ is a secure computation for the functionality $\mathcal{F}_{\mathsf{OPRF}}: (k, x) \rightarrow (\perp, f_k(x))$.
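The target functionality is easy to pin down in code as a trusted-party stub; any OPRF protocol must emulate this interaction without the trusted party. HMAC-SHA256 stands in for the PRF here (the paper’s PRF is a specific number-theoretic construction); the point is the interface, not the realization.

```python
# Trusted-party stub for F_OPRF: (k, x) -> (bottom, f_k(x)).
# HMAC-SHA256 is an illustrative stand-in for the PRF.
import hmac, hashlib, os

class IdealOPRF:
    """S contributes k, R contributes x; R alone learns f_k(x) and
    S learns nothing (its output is 'bottom')."""
    def __init__(self):
        self._k = None
    def sender_input(self, k: bytes) -> None:
        self._k = k                          # S never sees R's x
    def receiver_query(self, x: bytes) -> bytes:
        return hmac.new(self._k, x, hashlib.sha256).digest()  # R never sees k

F = IdealOPRF()
F.sender_input(os.urandom(32))
print(F.receiver_query(b"item-42").hex())
# A secure OPRF protocol emulates this interaction without the trusted
# party -- e.g., the committed-input protocol of this paper in the CRS model.
```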

We propose an OPRF protocol on committed inputs which requires only $O(1)$ modular exponentiations, and has a constant number of communication rounds (two in ROM). Our protocol is secure in the CRS model under the Composite Decisional Residuosity (CDR) assumption, while the PRF itself is secure on a polynomially-sized domain under the Decisional $q$-Diffie-Hellman Inversion assumption on a group of composite order, where $q$ is the size of the PRF domain; it also has the useful feature that $f_k$ is an injection for every $k$.

A practical OPRF protocol for an injective PRF, even limited to a polynomially-sized domain, is a versatile tool with many uses in secure protocol design. We show that our OPRF implies a new practical fully-simulatable adaptive (and committed) OT protocol secure without ROM. In another example, this oblivious PRF construction implies the first secure computation protocol of set intersection on committed data with computational cost of $O(N)$ exponentiations, where $N$ is the maximum size of both data sets.

Stanisław Jarecki, Xiaomin Liu
Towards a Theory of Extractable Functions

Extractable functions are functions where any adversary that outputs a point in the range of the function is guaranteed to “know” a corresponding preimage. Here, knowledge is captured by the existence of an efficient extractor that recovers the preimage from the internal state of the adversary. Extractability of functions was defined by the authors (ICALP ’08) in the context of perfectly one-way functions. It can be regarded as an abstraction from specific knowledge assumptions, such as the Knowledge of Exponent assumption (Hada and Tanaka, Crypto 1998).

We initiate a more general study of extractable functions. We explore two different approaches. The first approach is aimed at understanding the concept of extractability in and of itself; in particular, we demonstrate that a weak notion of extraction implies a strong one, and make rigorous the intuition that extraction and obfuscation are complementary notions.

In the second approach, we study the possibility of constructing cryptographic primitives from simpler or weaker ones while maintaining extractability. Results are generally positive. Specifically, we show that several cryptographic reductions are either “knowledge-preserving” or can be modified to be so. Examples include reductions from extractable weak one-way functions to extractable strong ones, from extractable pseudorandom generators to extractable pseudorandom functions, and from extractable one-way functions to extractable commitments. Other questions, such as constructing extractable pseudorandom generators from extractable one way functions, remain open.

Ran Canetti, Ronny Ramzi Dakdouk
Erratum to: Theory of Cryptography

Erratum to: O. Reingold (Ed.), Theory of Cryptography, DOI: 10.1007/978-3-642-00457-5. The book was inadvertently published with an incorrect name of the copyright holder. The name of the copyright holder for this book is: © Springer-Verlag Berlin Heidelberg. The book has been updated with the changes.

Omer Reingold
Backmatter
Metadata
Title
Theory of Cryptography
edited by
Omer Reingold
Copyright year
2009
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-00457-5
Print ISBN
978-3-642-00456-8
DOI
https://doi.org/10.1007/978-3-642-00457-5