
About This Book

TCC 2005, the 2nd Annual Theory of Cryptography Conference, was held in Cambridge, Massachusetts, on February 10–12, 2005. The conference received 84 submissions, of which the program committee selected 32 for presentation. These proceedings contain the revised versions of the submissions that were presented at the conference. These revisions have not been checked for correctness, and the authors bear full responsibility for the contents of their papers.

The conference program also included a panel discussion on the future of theoretical cryptography and its relationship to the real world (whatever that is). It also included the traditional “rump session,” featuring short, informal talks on late-breaking research news.

Much as hatters of old faced mercury-induced neurological damage as an occupational hazard, computer scientists will on rare occasion be afflicted with egocentrism, probably due to prolonged CRT exposure. Thus, you must view with pity and not contempt my unalloyed elation at having my name on the front cover of this LNCS volume, and my deep-seated conviction that I fully deserve the fame and riches that will surely come of it. However, having in recent years switched over to an LCD monitor, I would like to acknowledge some of the many who contributed to this conference. First thanks are due to the many researchers from all over the world who submitted their work to this conference. Lacking shrimp and chocolate-covered strawberries, TCC has to work hard to be a good conference. As a community, I think we have.

Table of Contents

Frontmatter

Hardness Amplification and Error Correction

Optimal Error Correction Against Computationally Bounded Noise

For computationally bounded adversarial models of error, we construct appealingly simple, efficient, cryptographic encoding and unique decoding schemes whose error-correction capability is much greater than classically possible. In particular:

1. For binary alphabets, we construct positive-rate coding schemes which are uniquely decodable from a 1/2 – γ error rate for any constant γ > 0.

2. For large alphabets, we construct coding schemes which are uniquely decodable from a $1 - \sqrt{R}$ error rate for any information rate R > 0.

Our results are qualitatively stronger than related work: the construction works in the public-key model (requiring no shared secret key or joint local state) and allows the channel to know everything that the receiver knows. In addition, our techniques can potentially be used to construct coding schemes that have information rates approaching the Shannon limit. Finally, our construction is qualitatively optimal: we show that unique decoding under high error rates is impossible in several natural relaxations of our model.

Silvio Micali, Chris Peikert, Madhu Sudan, David A. Wilson

Hardness Amplification of Weakly Verifiable Puzzles

Is it harder to solve many puzzles than it is to solve just one? This question has different answers, depending on how you define puzzles. For the case of inverting one-way functions it was shown by Yao that solving many independent instances simultaneously is indeed harder than solving a single instance (cf. the transformation from weak to strong one-way functions). The known proofs of that result, however, use in an essential way the fact that for one-way functions, verifying candidate solutions to a given puzzle is easy. We extend this result to the case where solutions are efficiently verifiable only by the party that generated the puzzle. We call such puzzles weakly verifiable. That is, for weakly verifiable puzzles we show that if no efficient algorithm can solve a single puzzle with probability more than ε, then no efficient algorithm can solve n independent puzzles simultaneously with probability more than ε^n. We also demonstrate that when the puzzles are not even weakly verifiable, solving many puzzles may be no harder than solving a single one.

Hardness amplification of weakly verifiable puzzles turns out to be closely related to the reduction of soundness error under parallel repetition in computationally sound arguments. Indeed, the proof of Bellare, Impagliazzo and Naor that parallel repetition reduces soundness error in three-round argument systems implies a result similar to our first result, albeit with considerably worse parameters. Also, our second result is an adaptation of their proof that parallel repetition of four-round systems may not reduce the soundness error.

Ran Canetti, Shai Halevi, Michael Steiner
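The quantitative claim above can be sanity-checked numerically: if each puzzle is solved independently with probability ε, then all n independent puzzles are solved with probability ε^n. A toy illustration (this models only the trivial independent attack; the theorem's content is that no efficient algorithm can do better):

```python
import random

def independent_success(eps: float, n: int) -> float:
    """Probability of solving all n puzzles when each independent
    attempt succeeds with probability eps."""
    return eps ** n

def simulate(eps: float, n: int, trials: int, seed: int = 0) -> float:
    """Monte Carlo estimate of the same quantity."""
    rng = random.Random(seed)
    wins = sum(
        all(rng.random() < eps for _ in range(n)) for _ in range(trials)
    )
    return wins / trials
```

For ε = 1/2 and n = 10 the success probability is already below one in a thousand, which is the exponential decay the amplification theorem establishes for weakly verifiable puzzles.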

On Hardness Amplification of One-Way Functions

We continue the study of the efficiency of black-box reductions in cryptography. We focus on the question of constructing strong one-way functions (respectively, permutations) from weak one-way functions (respectively, permutations). To make our impossibility results stronger, we focus on the weakest type of constructions: those that start from a weak one-way permutation and define a strong one-way function.

We show that for every “fully black-box” construction of an ε(n)-secure function based on a (1–δ(n))-secure permutation, if q(n) is the number of oracle queries used in the construction and ℓ(n) is the input length of the new function, then we have $q \geq \Omega(\frac{1}{\delta}\cdot \log\frac{1}{\epsilon})$ and $\ell \geq n + \Omega(\log 1/\epsilon) - O(\log q)$. This result is proved by showing that fully black-box reductions of strong to weak one-way functions imply the existence of “hitters” and then by applying known lower bounds for hitters. We also show a sort of reverse connection, and we revisit the construction of Goldreich et al. (FOCS 1990) in terms of this reverse connection.

Finally, we prove that any “weakly black-box” construction with parameters q(n) and ℓ(n) better than the above lower bounds implies the unconditional existence of strong one-way functions (and, therefore, the existence of a weakly black-box construction with q(n) = 0). This result, like the one for fully black-box reductions, is proved by reasoning about the function defined by such a construction when using the identity permutation as an oracle.

Henry Lin, Luca Trevisan, Hoeteck Wee

Graphs and Groups

Cryptography in Subgroups of $\mathbb{Z}_{n}^{*}$

We demonstrate the cryptographic usefulness of a small subgroup of $\mathbb{Z}_{n}^{*}$ of hidden order. Cryptographic schemes for integer commitment and digital signatures have been suggested over large subgroups of $\mathbb{Z}_{n}^{*}$; by reducing the order of the groups we obtain quite similar but more efficient schemes. The underlying cryptographic assumption resembles the strong RSA assumption.

We analyze a signature scheme known to be secure against known message attack and prove that it is secure against adaptive chosen message attack. This result does not necessarily rely on the use of a small subgroup, but the small subgroup can make the security reduction tighter.

We also investigate the case where $\mathbb{Z}_{n}^{*}$ has semi-smooth order. Using a new decisional assumption, related to high residuosity assumptions, we suggest a homomorphic public-key cryptosystem.

Jens Groth

Efficiently Constructible Huge Graphs That Preserve First Order Properties of Random Graphs

We construct efficiently computable sequences of random-looking graphs that preserve properties of the canonical random graphs G(2^n, p(n)). We focus on first-order graph properties, namely properties that can be expressed by a formula φ in the language where variables stand for vertices and the only relations are equality and adjacency (e.g., having an isolated vertex is a first-order property: ∃x ∀y (¬EDGE(x, y))). Random graphs are known to have remarkable structure w.r.t. first-order properties, as indicated by the following 0/1 law: for a variety of choices of p(n), any fixed first-order property φ holds for G(2^n, p(n)) with probability tending either to 0 or to 1 as n grows to infinity.

We first observe that similar 0/1 laws are satisfied by G(2^n, p(n)) even w.r.t. sequences of formulas {φ_n}_{n ∈ ℕ} with bounded quantifier depth, $\mathit{depth}(\phi_{n}) \leq \frac{1}{\lg(1/p(n))}$. We also demonstrate that 0/1 laws do not hold for random graphs w.r.t. properties of significantly larger quantifier depth. For most choices of p(n), we present efficient constructions of huge graphs with edge density nearly p(n) that emulate G(2^n, p(n)) by satisfying $\Theta(\frac{1}{\lg(1/p(n))})$-0/1 laws. We show both probabilistic constructions (which also have other properties, such as K-wise independence and being computationally indistinguishable from G(N, p(n))), and deterministic constructions where for each graph size we provide a specific graph that captures the properties of G(2^n, p(n)) for slightly smaller quantifier depths.

Moni Naor, Asaf Nussboim, Eran Tromer
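The isolated-vertex example above can be checked mechanically on sampled graphs. A minimal sketch for small n (the paper's graphs have 2^n vertices and are only implicitly represented; this toy samples G(n, p) explicitly):

```python
import random

def sample_gnp(n: int, p: float, seed: int = 0) -> set:
    """Sample an Erdos-Renyi random graph G(n, p) as a set of edges."""
    rng = random.Random(seed)
    return {
        (u, v)
        for u in range(n)
        for v in range(u + 1, n)
        if rng.random() < p
    }

def has_isolated_vertex(n: int, edges: set) -> bool:
    """The first-order property ∃x ∀y ¬EDGE(x, y): some vertex
    appears in no edge."""
    touched = {w for e in edges for w in e}
    return len(touched) < n
```

Repeating this for growing n and fixed p illustrates the 0/1 law empirically: the fraction of samples with an isolated vertex tends to 0 or 1 depending on p(n).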

Simulation and Secure Computation

Comparing Two Notions of Simulatability

In this work, relations between the security notions standard simulatability and universal simulatability for cryptographic protocols are investigated.

A simulatability-based notion of security considers a protocol π as secure as an idealization τ of the protocol task if and only if every attack on π can be simulated by an attack on τ.

Two formalizations, which both provide secure composition of protocols, are common: standard simulatability means that for every π-attack and protocol user H, there is a τ-attack such that H cannot distinguish π from τ. Universal simulatability means that for every π-attack, there is a τ-attack such that no protocol user H can distinguish π from τ.

Trivially, universal simulatability implies standard simulatability. We show: the converse is true with respect to perfect security, but not with respect to computational or statistical security.

In addition, we give a formal definition of a time-lock puzzle, which may be of independent interest. Although the described results do not depend on any computational assumption, we show that the existence of a time-lock puzzle gives an even stronger separation of standard and universal simulatability with respect to computational security.

Dennis Hofheinz, Dominique Unruh

Relaxing Environmental Security: Monitored Functionalities and Client-Server Computation

Definitions of security under the framework of Environmental Security (a.k.a. Network-Aware Security or Universally Composable Security) typically require “extractability” of the private inputs of parties running a protocol. Formalizing concepts that appeared in an earlier work [19], we introduce a framework of “Monitored Functionalities,” which allows us to drop this requirement from the security definition while still providing very strong composition properties. We also consider a specialization of the Environmental Security framework that designates one party as a “server” and all other parties as clients. Both contributions are aimed at providing weaker Environmental Security guarantees to simpler protocols. We illustrate the usability of the Monitored Functionalities framework by providing much simpler protocols in the plain model than in [19] for some limited functionalities in the server-client model.

Manoj Prabhakaran, Amit Sahai

Handling Expected Polynomial-Time Strategies in Simulation-Based Security Proofs

The standard class of adversaries considered in cryptography is that of strict polynomial-time probabilistic machines (or circuits). However, expected polynomial-time machines are often also considered. For example, there are many zero-knowledge protocols for which the only simulation techniques known run in expected (and not strict) polynomial time. In addition, it has been shown that expected polynomial-time simulation is essential for achieving constant-round black-box zero-knowledge protocols. This reliance on expected polynomial-time simulation introduces a number of conceptual and technical difficulties. In this paper, we develop techniques for dealing with expected polynomial-time adversaries in the context of simulation-based security proofs.

Jonathan Katz, Yehuda Lindell

Security of Encryption

Adaptively-Secure, Non-interactive Public-Key Encryption

Adaptively-secure encryption schemes ensure secrecy even in the presence of an adversary who can corrupt parties in an adaptive manner based on public keys, ciphertexts, and secret data of already-corrupted parties. Ideally, an adaptively-secure encryption scheme should, like standard public-key encryption, allow arbitrarily-many parties to use a single encryption key to securely encrypt arbitrarily-many messages to a given receiver who maintains only a single short decryption key. However, it is known that these requirements are impossible to achieve: no non-interactive encryption scheme that supports encryption of an unbounded number of messages and uses a single, unchanging decryption key can be adaptively secure. Impossibility holds even if secure data erasure is possible.

We show that this limitation can be overcome by updating the decryption key over time and making some mild assumptions about the frequency of communication between parties. Using this approach, we construct adaptively-secure, completely non-interactive encryption schemes supporting secure encryption of arbitrarily-many messages from arbitrarily-many senders. Our schemes additionally provide forward security and security against chosen-ciphertext attacks.

Ran Canetti, Shai Halevi, Jonathan Katz

Adaptive Security of Symbolic Encryption

We prove a computational soundness theorem for the symbolic analysis of cryptographic protocols which extends an analogous theorem of Abadi and Rogaway (J. of Cryptology 15(2):103–127, 2002) to a scenario where the adversary gets to see the encryption of a sequence of adaptively chosen symbolic expressions. The extension of the theorem of Abadi and Rogaway to such an adaptive scenario is nontrivial, and raises issues related to the classic problem of selective decommitment, which do not appear in the original formulation of the theorem.

Although the theorem of Abadi and Rogaway applies only to passive adversaries, our extension to adaptive attacks makes it substantially stronger, and powerful enough to analyze the security of cryptographic protocols of practical interest. We exemplify the use of our soundness theorem in the analysis of group key distribution protocols like those that arise in multicast and broadcast applications. Specifically, we provide cryptographic definitions of security for multicast key distribution protocols both in the symbolic as well as the computational framework and use our theorem to prove soundness of the symbolic definition.

Daniele Micciancio, Saurabh Panjwani

Chosen-Ciphertext Security of Multiple Encryption

Encryption of data using multiple, independent encryption schemes (“multiple encryption”) has been suggested in a variety of contexts, and can be used, for example, to protect against partial key exposure or cryptanalysis, or to enforce threshold access to data. Most prior work on this subject has focused on the security of multiple encryption against chosen-plaintext attacks, and has shown constructions secure in this sense based on the chosen-plaintext security of the component schemes. Subsequent work has sometimes assumed that these solutions are also secure against chosen-ciphertext attacks when component schemes with stronger security properties are used. Unfortunately, this intuition is false for all existing multiple encryption schemes.

Here, in addition to formalizing the problem of chosen-ciphertext security for multiple encryption, we give simple, efficient, and generic constructions of multiple encryption schemes secure against chosen-ciphertext attacks (based on any component schemes secure against such attacks) in the standard model. We also give a more efficient construction from any (hierarchical) identity-based encryption scheme secure against selective-identity chosen-plaintext attacks. Finally, we discuss a wide range of applications for our proposed schemes.

Yevgeniy Dodis, Jonathan Katz

Steganography and Zero Knowledge

Public-Key Steganography with Active Attacks

A complexity-theoretic model for public-key steganography with active attacks is introduced. The notion of steganographic security against adaptive chosen-covertext attacks (SS-CCA) and a relaxation called steganographic security against publicly-detectable replayable adaptive chosen-covertext attacks (SS-PDR-CCA) are formalized. These notions are closely related to CCA-security and PDR-CCA-security for public-key cryptosystems. In particular, it is shown that any SS-(PDR-)CCA stegosystem is a (PDR-)CCA-secure public-key cryptosystem and that an SS-PDR-CCA stegosystem for any covertext distribution with sufficiently large min-entropy can be realized from any PDR-CCA-secure public-key cryptosystem with pseudorandom ciphertexts.

Michael Backes, Christian Cachin

Upper and Lower Bounds on Black-Box Steganography

We study the limitations of steganography when the sender is not using any properties of the underlying channel beyond its entropy and the ability to sample from it. On the negative side, we show that the number of samples the sender must obtain from the channel is exponential in the rate of the stegosystem. On the positive side, we present the first secret-key stegosystem that essentially matches this lower bound regardless of the entropy of the underlying channel. Furthermore, for high-entropy channels, we present the first secret-key stegosystem that matches this lower bound

statelessly

(i.e., without requiring synchronized state between sender and receiver).

Nenad Dedić, Gene Itkis, Leonid Reyzin, Scott Russell

Fair-Zero Knowledge

We introduce Fair Zero-Knowledge, a multi-verifier ZK system where every proof is guaranteed to be “zero-knowledge for all verifiers.” That is, if an honest verifier accepts a fair zero-knowledge proof, then he is assured that all other verifiers also learn nothing more than the verity of the statement in question, even if they maliciously collude with a cheating prover.

We construct Fair Zero-Knowledge systems based on standard complexity assumptions (specifically, the quadratic residuosity assumption) and an initial, one-time use of a physically secure communication channel (specifically, each verifier sends the prover a private message in an envelope). All other communication occurs (and must occur) on a broadcast channel.

The main technical challenge of our construction consists of provably removing any possibility of using steganography in a ZK proof. To overcome this technical difficulty, we introduce tools, such as Unique Zero Knowledge, that may be of independent interest.

Matt Lepinski, Silvio Micali, Abhi Shelat

Secure Computation I

How to Securely Outsource Cryptographic Computations

We address the problem of using untrusted (potentially malicious) cryptographic helpers. We provide a formal security definition for securely outsourcing computations from a computationally limited device to an untrusted helper. In our model, the adversarial environment writes the software for the helper, but then does not have direct communication with it once the device starts relying on it. In addition to security, we also provide a framework for quantifying the efficiency and checkability of an outsourcing implementation. We present two practical outsource-secure schemes. Specifically, we show how to securely outsource modular exponentiation, which is the computational bottleneck in most public-key cryptography on computationally limited devices. Without outsourcing, a device would need O(n) modular multiplications to carry out modular exponentiation for n-bit exponents. The load reduces to O(log² n) for any exponentiation-based scheme where the honest device may use two untrusted exponentiation programs; we highlight the Cramer-Shoup cryptosystem [13] and Schnorr signatures [28] as examples. With a relaxed notion of security, we achieve the same load reduction for a new CCA2-secure encryption scheme using only one untrusted Cramer-Shoup encryption program.

Susan Hohenberger, Anna Lysyanskaya
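The O(n) baseline mentioned above is the cost of ordinary square-and-multiply exponentiation, which the limited device would otherwise have to run itself. A sketch that counts the modular multiplications (this shows only the baseline, not the paper's outsourcing protocol):

```python
def modexp(base: int, exp: int, mod: int) -> tuple[int, int]:
    """Left-to-right square-and-multiply.  Returns (result, number of
    modular multiplications); an n-bit exponent costs between n and 2n."""
    result, mults = 1, 0
    for bit in bin(exp)[2:]:
        result = (result * result) % mod   # always square
        mults += 1
        if bit == "1":                     # multiply on set bits
            result = (result * base) % mod
            mults += 1
    return result, mults
```

For a 2048-bit exponent this is roughly 2000–4000 multiplications on the device; the paper's point is that with two untrusted helpers the device's own work drops to O(log² n).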

Secure Computation of the Mean and Related Statistics

In recent years there has been massive progress in the development of technologies for storing and processing of data. If statistical analysis could be applied to such data when it is distributed between several organisations, there could be huge benefits. Unfortunately, in many cases, for legal or commercial reasons, this is not possible.

The idea of using the theory of multi-party computation to analyse efficient algorithms for privacy-preserving data mining was proposed by Pinkas and Lindell. The point is that algorithms developed in this way can be used to overcome the apparent impasse described above: the owners of data can, in effect, pool their data while ensuring that privacy is maintained.

Motivated by this, we describe how to securely compute the mean of an attribute value in a database that is shared between two parties. We also demonstrate that existing solutions in the literature that could be used to do this leak information, therefore underlining the importance of applying rigorous theoretical analysis rather than settling for ad hoc techniques.

Eike Kiltz, Gregor Leander, John Malone-Lee
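As background, the additive-sharing pattern that underlies such two-party statistics can be sketched in a few lines. Note that this toy opens the total sum and count in the clear before dividing; the abstract's warning is precisely that intermediate values like these can leak, so a real protocol must compute the quotient securely as well. All numeric values below are hypothetical:

```python
import random

Q = 2**61 - 1  # public modulus, chosen larger than any possible total

def share(x: int, rng: random.Random) -> tuple[int, int]:
    """Split x into two additive shares mod Q; each share alone is
    uniformly distributed and reveals nothing about x."""
    r = rng.randrange(Q)
    return r, (x - r) % Q

rng = random.Random(42)
s1 = share(1234, rng)  # party 1's local sum of the attribute
s2 = share(766, rng)   # party 2's local sum
c1 = share(10, rng)    # party 1's record count
c2 = share(10, rng)    # party 2's record count

# Opening the recombined totals and dividing in the clear:
total_sum = (s1[0] + s1[1] + s2[0] + s2[1]) % Q
total_count = (c1[0] + c1[1] + c2[0] + c2[1]) % Q
mean = total_sum / total_count
```

Even this "pool then divide" approach reveals each party's complement of the other's inputs once the totals are opened, which is the kind of ad hoc leakage the paper's rigorous analysis is designed to rule out.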

Keyword Search and Oblivious Pseudorandom Functions

We study the problem of privacy-preserving access to a database. Particularly, we consider the problem of privacy-preserving keyword search (KS), where records in the database are accessed according to their associated keywords and where we care for the privacy of both the client and the server. We provide efficient solutions for various settings of KS, based either on specific assumptions or on general primitives (mainly oblivious transfer). Our general solutions rely on a new connection between KS and the oblivious evaluation of pseudorandom functions (OPRFs). We therefore study both the definition and construction of OPRFs and, as a corollary, give improved constructions of OPRFs that may be of independent interest.

Michael J. Freedman, Yuval Ishai, Benny Pinkas, Omer Reingold

Secure Computation II

Evaluating 2-DNF Formulas on Ciphertexts

Let ψ be a 2-DNF formula on boolean variables x_1,...,x_n ∈ {0,1}. We present a homomorphic public-key encryption scheme that allows the public evaluation of ψ given an encryption of the variables x_1,...,x_n. In other words, given the encryption of the bits x_1,...,x_n, anyone can create the encryption of ψ(x_1,...,x_n). More generally, we can evaluate quadratic multivariate polynomials on ciphertexts, provided the resulting value falls within a small set. We present a number of applications of the system:

1. In a database of size n, the total communication in the basic step of the Kushilevitz-Ostrovsky PIR protocol is reduced from $\sqrt{n}$ to $\sqrt[3]{n}$.

2. An efficient election system based on homomorphic encryption where voters do not need to include non-interactive zero-knowledge proofs that their ballots are valid. The election system is proved secure without random oracles but is still efficient.

3. A protocol for universally verifiable computation.

Dan Boneh, Eu-Jin Goh, Kobbi Nissim
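Without the encryption layer, the reduction from a 2-DNF to a quadratic polynomial can be sketched directly: each clause is a product of two arithmetized literals, and the formula is true exactly when the sum of the clause products is nonzero, a value the scheme can test because it falls in a small set. A toy check over plain bits (the clause encoding below is our own, hypothetical one):

```python
from itertools import product

def lit(val: int, positive: bool) -> int:
    """Arithmetize a literal over {0,1}: x for x, 1 - x for ¬x."""
    return val if positive else 1 - val

def dnf_sum(clauses, assignment) -> int:
    """The quadratic polynomial Σ over clauses of l1 * l2; it is
    nonzero exactly when the 2-DNF is satisfied."""
    return sum(
        lit(assignment[i], pi) * lit(assignment[j], pj)
        for (i, pi, j, pj) in clauses
    )

# ψ = (x0 ∧ x1) ∨ (¬x0 ∧ x2), clauses as (index, sign, index, sign).
clauses = [(0, True, 1, True), (0, False, 2, True)]

# Exhaustive check: the sum is positive exactly on satisfying assignments.
for a in product((0, 1), repeat=3):
    truth = (a[0] and a[1]) or ((not a[0]) and a[2])
    assert (dnf_sum(clauses, a) > 0) == bool(truth)
```

In the actual scheme the same sum is computed over ciphertexts, using one homomorphic multiplication per clause and additions to aggregate.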

Share Conversion, Pseudorandom Secret-Sharing and Applications to Secure Computation

We present a method for converting shares of a secret into shares of the same secret in a different secret-sharing scheme using only local computation and no communication between players. In particular, shares in a replicated scheme based on a CNF representation of the access structure can be converted into shares from any linear scheme for the same structure.

We show how this can be combined with any pseudorandom function to create, from initially distributed randomness, any number of Shamir secret-sharings of (pseudo)random values without communication. We apply this technique to obtain efficient non-interactive protocols for secure computation of low-degree polynomials, which in turn give rise to other applications in secure computation and threshold cryptography. For instance, we can make the Cramer-Shoup threshold cryptosystem by Canetti and Goldwasser fully non-interactive, or construct non-interactive threshold signature schemes secure without random oracles.

The latter solutions are practical only for a relatively small number of players. However, in our main applications the number of players is typically small, and furthermore it can be argued that no solution that makes a black-box use of a pseudorandom function can be more efficient.

Ronald Cramer, Ivan Damgård, Yuval Ishai
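As background for the conversion target, here is a minimal Shamir sharing and reconstruction over a prime field. This is the standard textbook scheme, not the paper's local share-conversion procedure:

```python
import random

P = 2**61 - 1  # Mersenne prime used as the field modulus

def shamir_share(secret: int, t: int, n: int, rng: random.Random):
    """Degree-t Shamir sharing: shares are points of a random
    polynomial whose constant term is the secret."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t)]

    def f(x: int) -> int:
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P

    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares) -> int:
    """Lagrange interpolation at 0 over GF(P); needs t+1 shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

The paper's contribution is that shares of this form, for pseudorandom secrets, can be derived locally from replicated shares of PRF keys, with no interaction between the players.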

Toward Privacy in Public Databases

We initiate a theoretical study of the census problem. Informally, in a census individual respondents give private information to a trusted party (the census bureau), who publishes a sanitized version of the data. There are two fundamentally conflicting requirements: privacy for the respondents and utility of the sanitized data. Unlike in the study of secure function evaluation, in which privacy is preserved to the extent possible given a specific functionality goal, in the census problem privacy is paramount; intuitively, things that cannot be learned “safely” should not be learned at all.

An important contribution of this work is a definition of privacy (and privacy compromise) for statistical databases, together with a method for describing and comparing the privacy offered by specific sanitization techniques. We obtain several privacy results using two different sanitization techniques, and then show how to combine them via cross training. We also obtain two utility results involving clustering.

Shuchi Chawla, Cynthia Dwork, Frank McSherry, Adam Smith, Hoeteck Wee

Quantum Cryptography and Universal Composability

The Universal Composable Security of Quantum Key Distribution

The existing unconditional security definitions of quantum key distribution (QKD) do not apply to joint attacks over QKD and the subsequent use of the resulting key. In this paper, we close this potential security gap by using a universal composability theorem for the quantum setting. We first derive a composable security definition for QKD. We then prove that the usual security definition of QKD still implies the composable security definition. Thus, a key produced in any QKD protocol that is unconditionally secure in the usual definition can indeed be safely used, a property of QKD that is hitherto unproven. We propose two other useful sufficient conditions for composability. As a simple application of our result, we show that keys generated by repeated runs of QKD degrade slowly.

Michael Ben-Or, Michał Horodecki, Debbie W. Leung, Dominic Mayers, Jonathan Oppenheim

Universally Composable Privacy Amplification Against Quantum Adversaries

Privacy amplification is the art of shrinking a partially secret string Z to a highly secret key S. We show that, even if an adversary holds quantum information about the initial string Z, the key S obtained by two-universal hashing is secure, according to a universally composable security definition. Additionally, we give an asymptotically optimal lower bound on the length of the extractable key S in terms of the adversary’s (quantum) knowledge about Z. Our result has applications in quantum cryptography. In particular, it implies that many of the known quantum key distribution protocols are universally composable.

Renato Renner, Robert König
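The two-universal hashing referred to above can be instantiated with the classical Carter-Wegman affine family. A purely classical sketch (the quantum side information of the paper is not modelled here) that empirically checks the roughly 1/m pairwise-collision probability defining two-universality:

```python
import random

P = 2**61 - 1  # prime modulus, larger than the input domain

def hash_fn(a: int, b: int, x: int, m: int) -> int:
    """Carter-Wegman family h_{a,b}(x) = ((a*x + b) mod P) mod m,
    with a drawn from [1, P) and b from [0, P)."""
    return ((a * x + b) % P) % m

def avg_collision_prob(x: int, y: int, m: int,
                       trials: int, seed: int = 0) -> float:
    """Empirical probability over (a, b) that two fixed distinct
    inputs collide; two-universality bounds this by about 1/m."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a, b = rng.randrange(1, P), rng.randrange(P)
        hits += hash_fn(a, b, x, m) == hash_fn(a, b, y, m)
    return hits / trials
```

Privacy amplification applies such a randomly chosen hash function to the partially secret string Z; the paper proves the resulting key S is composably secure even against a quantum adversary.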

A Universally Composable Secure Channel Based on the KEM-DEM Framework

For ISO standards on public-key encryption, Shoup introduced the framework of KEM (Key Encapsulation Mechanism) and DEM (Data Encapsulation Mechanism) for formalizing and realizing one-directional hybrid encryption; KEM is a formalization of asymmetric encryption specified for key distribution, and DEM is a formalization of symmetric encryption. This paper investigates a more general hybrid protocol, secure channel, using KEM and DEM, such that KEM is used for distribution of a session key and DEM, along with the session key, is used for multiple bi-directional encrypted transactions in a session. This paper shows that a KEM semantically secure against adaptively chosen ciphertext attacks (IND-CCA2) and a DEM semantically secure against adaptively chosen plaintext/ciphertext attacks (IND-P2-C2), along with secure signatures and an ideal certification authority, are sufficient to realize a universally composable (UC) secure channel. To obtain the main result, this paper also shows several equivalence results: UC KEM, IND-CCA2 KEM and NM-CCA2 (non-malleable against CCA2) KEM are equivalent, and UC DEM, IND-P2-C2 DEM and NM-P2-C2 DEM are equivalent.

Waka Nagao, Yoshifumi Manabe, Tatsuaki Okamoto

Cryptographic Primitives and Security

Sufficient Conditions for Collision-Resistant Hashing

We present several new constructions of collision-resistant hash functions (CRHFs) from general assumptions. We start with a simple construction of a CRHF from any homomorphic encryption. Then, we strengthen this result by presenting constructions of CRHFs from two other primitives that are implied by homomorphic encryption: one-round private information retrieval (PIR) protocols and homomorphic one-way commitments.

Yuval Ishai, Eyal Kushilevitz, Rafail Ostrovsky

The Relationship Between Password-Authenticated Key Exchange and Other Cryptographic Primitives

We consider the problem of password-authenticated key exchange (PAK), also known as session-key generation using passwords: constructing session-key generation protocols that are secure against active adversaries (person-in-the-middle) and only require the legitimate parties to share a low-entropy password (e.g., coming from a dictionary of size poly(n)).

We study the relationship between PAK and other cryptographic primitives. The main result of this paper is that password-authenticated key exchange and public-key encryption are incomparable under black-box reductions. In addition, we strengthen previous results by Halevi and Krawczyk [14] and Boyarsky [5] and show how to build key agreement and semi-honest oblivious transfer from any PAK protocol that is secure for the Goldreich-Lindell (GL) definition [11].

We highlight the difference between two existing definitions of PAK, namely the indistinguishability-based definition of Bellare, Pointcheval and Rogaway (BPR) [1] and the simulation-based definition of Goldreich and Lindell [11], by showing that there exists a PAK protocol that is secure for the BPR definition and only assumes the existence of one-way functions in the case of exponential-sized dictionaries. Hence, unlike the GL definition, the BPR definition does not imply semi-honest oblivious transfer for exponential-sized dictionaries under black-box reductions.

Minh-Huyen Nguyen

On the Relationships Between Notions of Simulation-Based Security

Several compositional forms of simulation-based security have been proposed in the literature, including universal composability, black-box simulatability, and variants thereof. These relations between a protocol and an ideal functionality are similar enough that they can be ordered from strongest to weakest according to the logical form of their definitions. However, determining whether two relations are in fact identical depends on some subtle features that have not been brought out in previous studies. We identify the position of a “master process” in the distributed system, and some limitations on transparent message forwarding within computational complexity bounds, as two main factors. Using a general computational framework, we clarify the relationships between the simulation-based security conditions.

Anupam Datta, Ralf Küsters, John C. Mitchell, Ajith Ramanathan

Encryption and Signatures

A New Cramer-Shoup Like Methodology for Group Based Provably Secure Encryption Schemes

A theoretical framework is given for the design of public-key cryptosystems, provably secure in the sense of IND-CCA, that take non-abelian groups as a base. Our construction is inspired by Cramer and Shoup's general framework for developing secure encryption schemes from certain language-membership problems; thus all our proofs are in the standard model, without any idealization assumptions. The skeleton we present is conceived as a guiding tool towards the construction of secure concrete schemes from finite non-abelian groups (although it can also be used in conjunction with finite abelian groups).

María Isabel González Vasco, Consuelo Martínez, Rainer Steinwandt, Jorge L. Villar

Further Simplifications in Proactive RSA Signatures

We present a new robust proactive (and threshold) RSA signature scheme secure with the optimal threshold of t < n/2 corruptions. The new scheme offers a simpler alternative to the best previously known (static) proactive RSA scheme given by Tal Rabin [36], itself a simplification over the previous schemes given by Frankel et al. [18,17]. The new scheme is conceptually simple because all the sharing and proactive re-sharing of the RSA secret key is done modulo a prime, while the reconstruction of the RSA signature employs an observation that the secret can be recovered from such sharing using a simple equation over the integers. This equation was first observed and utilized by Luo and Lu in a design of a simple and efficient proactive RSA scheme [31], which was not proven secure and which, alas, turned out to be completely insecure [29] because the aforementioned equation leaks some partial information about the shared secret. Interestingly, this partial information leakage can be proven harmless once the polynomial sharing used by [31] is replaced by top-level additive sharing with second-level polynomial sharing for back-up.

Apart from its conceptual simplicity and new techniques of independent interest, efficiency-wise the new scheme gives a factor of 2 improvement in speed and share size in the general case, and almost a factor of 4 improvement for the common RSA public exponents 3, 17, or 65537, over the scheme of [36] as analyzed in [63]. However, we also present an improved security analysis and a generalization of the [36] scheme, which shows that this scheme remains secure for smaller share sizes, leading to the same factor of 2 or 4 improvements for that scheme as well.

Stanisław Jarecki, Nitesh Saxena
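
The top-level additive sharing described in the abstract can be illustrated with a small sketch (the function name, the example prime, and the example secret are our own choices for illustration, not values from the paper): the secret is split into random summands modulo a prime, and the shares sum back to the secret.

```python
import secrets

def additive_share(secret: int, n: int, p: int) -> list[int]:
    """Additively share `secret` among n parties modulo a prime p:
    n-1 shares are uniformly random, and the last is a correction term,
    so all n shares sum to the secret mod p. Any n-1 shares alone are
    uniformly distributed and reveal nothing about the secret."""
    shares = [secrets.randbelow(p) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % p)
    return shares

p = 2**61 - 1        # an example prime, not prescribed by the scheme
d = 123456789        # stand-in for the shared RSA secret exponent
shares = additive_share(d, 5, p)
assert sum(shares) % p == d
```

The scheme's robustness and proactive re-sharing machinery are, of course, not captured by this sketch; it shows only the sharing-modulo-a-prime idea the abstract highlights.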

Proof of Plaintext Knowledge for the Ajtai-Dwork Cryptosystem

Ajtai and Dwork proposed a public-key encryption scheme in 1996 which they proved secure under the assumption that the unique shortest vector problem is hard in the worst case. This cryptosystem and its extension by Regev are the only ones known for which security can be proved under a worst-case assumption, and as such they present a particularly interesting case to study.

In this paper, we show statistical zero-knowledge protocols for statements of the form “plaintext m corresponds to ciphertext c” and “ciphertexts c and c′ decrypt to the same value” for the Ajtai-Dwork cryptosystem. We then show an interactive zero-knowledge proof of plaintext knowledge (PPK) for the Ajtai-Dwork cryptosystem, based directly on the security of the cryptosystem rather than resorting to general interactive zero-knowledge constructions. The witness for these proofs is the randomness used in the encryption.

Shafi Goldwasser, Dmitriy Kharchenko

Information Theoretic Cryptography

Entropic Security and the Encryption of High Entropy Messages

We study entropic security, an information-theoretic notion of security introduced by Russell and Wang [24] in the context of encryption and by Canetti et al. [5,6] in the context of hash functions. Informally, a probabilistic map $Y = \mathcal{E}(X)$ (e.g., an encryption scheme or a hash function) is entropically secure if knowledge of Y does not help in predicting any predicate of X, whenever X has high min-entropy from the adversary's point of view. On one hand, we strengthen the formulation of [5,6,24] and show that entropic security in fact implies that Y does not help in predicting any function of X (as opposed to a predicate), bringing this notion closer to the conventional notion of semantic security [10]. On the other hand, we also show that entropic security is equivalent to indistinguishability on pairs of input distributions of sufficiently high entropy, which is in turn related to randomness extraction from non-uniform distributions [21].

We then use the equivalence above, and the connection to randomness extraction, to prove several new results on entropically secure encryption. First, we give two general frameworks for constructing entropically secure encryption schemes: one based on expander graphs and the other on XOR-universal hash functions. These schemes generalize the schemes of Russell and Wang, yielding simpler constructions and proofs as well as improved parameters. To encrypt an n-bit message of min-entropy t while allowing at most ε advantage to the adversary, our best schemes use a shared secret key of length $k = n - t + 2\log(\frac{1}{\epsilon})$. Second, we obtain lower bounds on the key length k for entropic security and indistinguishability. In particular, we show near tightness of our constructions: k > n – t. For a large class of schemes — including all the schemes we study — the bound can be strengthened to $k \geq n - t + \log(\frac{1}{\epsilon}) - O(1)$.

Yevgeniy Dodis, Adam Smith
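
The key-length bound stated in the abstract is concrete enough to evaluate directly. As a small sketch (the function name is ours, not from the paper), the following computes k = n - t + 2*log2(1/eps), rounded up to a whole number of bits:

```python
from math import ceil, log2

def entropic_key_length(n: int, t: int, eps: float) -> int:
    """Key length k = n - t + 2*log2(1/eps) for encrypting an n-bit
    message of min-entropy t with adversarial advantage at most eps,
    per the best schemes described in the abstract (rounded up)."""
    return ceil(n - t + 2 * log2(1 / eps))

# A fully uniform 128-bit message (t = n) needs only the slack term:
print(entropic_key_length(128, 128, 2**-40))  # → 80
```

Note how the key can be much shorter than the message whenever the message already carries high min-entropy, which is the point of the construction.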

Error Correction in the Bounded Storage Model

We initiate a study of Maurer's bounded storage model (JoC, 1992) in the presence of transmission errors and perhaps other types of errors that cause different parties to have inconsistent views of the public random source. Such errors seem inevitable in any implementation of the model. All previous schemes and protocols in the model assume a perfectly consistent view of the public source from all parties and do not function correctly in the presence of errors, while the private-key encryption scheme of Aumann, Ding and Rabin (IEEE IT, 2002) can be extended to tolerate only an O(1/log(1/ε)) fraction of errors, where ε is an upper bound on the advantage of an adversary.

In this paper, we provide a general paradigm for constructing secure and error-resilient private-key cryptosystems in the bounded storage model that tolerate a constant fraction of errors and attain the near-optimal parameters achieved by Vadhan's construction (JoC, 2004) in the errorless case. In particular, we show that any local fuzzy extractor yields a secure and error-resilient cryptosystem in the model, in analogy to the result of Lu (JoC, 2004) that any local strong extractor yields a secure cryptosystem in the errorless case, and we construct efficient local fuzzy extractors by extending Vadhan's sample-then-extract paradigm. The main ingredients of our constructions are averaging samplers (Bellare and Rompel, FOCS '94), randomness extractors (Nisan and Zuckerman, JCSS, 1996), error-correcting codes, and fuzzy extractors (Dodis, Reyzin and Smith, EUROCRYPT '04).

Yan Zong Ding
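
The locality idea behind the sample-then-extract paradigm can be sketched in a few lines (this is our own toy illustration, not the paper's construction: SHA-256 stands in for a true randomness extractor, and no error correction is shown). The point is that each party reads only a few positions of the huge public source, then compresses the sampled bits down to a short key:

```python
import hashlib
import random

def sample_then_extract(source: bytes, positions, salt: bytes) -> bytes:
    """Toy sample-then-extract: read only the given positions of the
    large public source (locality), then hash the sampled substring
    down to a short 16-byte key. Hashing is a stand-in for a proper
    extractor; real constructions use averaging samplers and
    randomness extractors as listed in the abstract."""
    sampled = bytes(source[i] for i in positions)
    return hashlib.sha256(salt + sampled).digest()[:16]

random.seed(1)
source = random.randbytes(1 << 16)                 # stand-in for the public source
positions = random.sample(range(len(source)), 64)  # shared sampled positions
k1 = sample_then_extract(source, positions, b"salt")
k2 = sample_then_extract(source, positions, b"salt")
assert k1 == k2 and len(k1) == 16
```

In the error model the paper studies, the two parties' views of `source` may disagree in a constant fraction of positions, which is exactly why the hash here would fail and a local *fuzzy* extractor is needed instead.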

Characterizing Ideal Weighted Threshold Secret Sharing

Weighted threshold secret sharing was introduced by Shamir in his seminal work on secret sharing. In such settings, there is a set of users where each user is assigned a positive weight. A dealer wishes to distribute a secret among those users so that a subset of users may reconstruct the secret if and only if the sum of weights of its users exceeds a certain threshold. A secret sharing scheme is ideal if the size of the domain of shares of each user is the same as the size of the domain of possible secrets (this is the smallest possible size for the domain of shares). The family of subsets authorized to reconstruct the secret in a secret sharing scheme is called an access structure. An access structure is ideal if there exists an ideal secret sharing scheme that realizes it.

It is known that some weighted threshold access structures are not ideal, while other nontrivial weighted threshold access structures do have an ideal scheme that realizes them. In this work we characterize all weighted threshold access structures that are ideal. We show that a weighted threshold access structure is ideal if and only if it is a hierarchical threshold access structure (as introduced by Simmons), or a tripartite access structure (these structures, which we introduce here, generalize the concept of bipartite access structures due to Padró and Sáez), or a composition of two ideal weighted threshold access structures that are defined on smaller sets of users. We further show that in all those cases the weighted threshold access structure may be realized by a linear ideal secret sharing scheme. The proof of our characterization relies heavily on the strong connection between ideal secret sharing schemes and matroids, as proved by Brickell and Davenport.

Amos Beimel, Tamir Tassa, Enav Weinreb
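
The access structure itself is simple to state in code. As a minimal sketch (names and example weights are ours; we use the meets-or-exceeds convention for the threshold), a subset is authorized exactly when its total weight reaches the threshold:

```python
def is_authorized(weights: dict, threshold: int, subset: set) -> bool:
    """Weighted threshold access structure: a subset of users may
    reconstruct the secret iff the sum of its members' weights reaches
    the threshold. Ideality of a scheme realizing this structure is
    the (much harder) question the paper characterizes."""
    return sum(weights[u] for u in subset) >= threshold

weights = {"A": 3, "B": 2, "C": 1, "D": 1}
print(is_authorized(weights, 4, {"A", "B"}))       # → True  (3 + 2 = 5)
print(is_authorized(weights, 4, {"B", "C", "D"}))  # → True  (2 + 1 + 1 = 4)
print(is_authorized(weights, 4, {"C", "D"}))       # → False (1 + 1 = 2)
```

Note that monotonicity is automatic: adding a user to an authorized subset can only increase its total weight.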

Backmatter
