
2014 | Book

Security and Cryptography for Networks

9th International Conference, SCN 2014, Amalfi, Italy, September 3-5, 2014. Proceedings

Editors: Michel Abdalla, Roberto De Prisco

Publisher: Springer International Publishing

Book Series: Lecture Notes in Computer Science


About this book

This book constitutes the proceedings of the 9th International Conference on Security and Cryptography, SCN 2014, held in Amalfi, Italy, in September 2014. The 31 papers presented in this volume were carefully reviewed and selected from 95 submissions. They are organized in topical sections on key exchange; multilinear maps and obfuscation; pseudorandom function extensions; secure computation - foundations and algorithms; network security; functional encryption; cryptanalysis; secure computation - implementation; zero knowledge; message authentication; proofs of space and erasure; public-key encryption.

Table of Contents

Frontmatter

Key Exchange

Universally Composable Non-Interactive Key Exchange

We consider the notion of a non-interactive key exchange (NIKE). A NIKE scheme allows a party A to compute a common shared key with another party B from B’s public key and A’s secret key alone. This computation requires no interaction between A and B, a feature which distinguishes NIKE from regular (i.e., interactive) key exchange not only quantitatively, but also qualitatively.
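For intuition, the classic Diffie–Hellman construction already gives a NIKE in this sense: each party derives the shared key locally from its own secret key and the other party's public key, with no messages exchanged. A minimal Python sketch, illustrative only — the group parameters and key derivation here are simplifications, not the paper's scheme:

```python
import hashlib

# Toy Diffie-Hellman-based NIKE sketch (illustration, not the paper's scheme).
# Group: the 768-bit Oakley prime from RFC 2409 with generator 2; real
# deployments would use larger groups and a proper KDF.
P = 0xFFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9A63A3620FFFFFFFFFFFFFFFF
G = 2

def keygen(seed: int):
    """Derive a (secret, public) key pair; `seed` stands in for real randomness."""
    sk = seed % (P - 2) + 2
    pk = pow(G, sk, P)
    return sk, pk

def shared_key(my_sk: int, their_pk: int) -> bytes:
    """Non-interactive: combine one's own secret key with the other's public key."""
    s = pow(their_pk, my_sk, P)
    return hashlib.sha256(s.to_bytes((s.bit_length() + 7) // 8, "big")).digest()

sk_a, pk_a = keygen(0xA11CE)
sk_b, pk_b = keygen(0xB0B)
# Both parties compute the same key from local data alone.
assert shared_key(sk_a, pk_b) == shared_key(sk_b, pk_a)
```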

Our first contribution is a formalization of NIKE protocols as ideal functionalities in the Universal Composability (UC) framework. As we will argue, existing NIKE definitions (all of which are game-based) do not support a modular analysis either of NIKE schemes themselves, or of the use of NIKE schemes. We provide a simple and natural UC-based NIKE definition that allows for a modular analysis both of NIKE schemes and their use in larger protocols.

We investigate the properties of our new definition, and in particular its relation to existing game-based NIKE definitions. We find that (a) game-based NIKE security is equivalent to UC-based NIKE security against static corruptions, and (b) UC-NIKE security against adaptive corruptions cannot be achieved without additional assumptions (but can be achieved in the random oracle model).

Our results suggest that our UC-based NIKE definition is a useful and simple abstraction of non-interactive key exchange.

Eduarda S. V. Freire, Julia Hesse, Dennis Hofheinz
Forward Secure Non-Interactive Key Exchange

Exposure of secret keys is a major concern when cryptographic protocols are implemented on weakly secure devices, and forward security is a way to mitigate the damage when such an event occurs. In a forward-secure scheme, the public key is fixed while the secret key is updated by a one-way process at regular time periods, so that security of the scheme is ensured for any period prior to the exposure, since previous secret keys cannot be recovered from the corrupted one. Efficient constructions have been proposed for digital signatures and public-key encryption schemes, but none for non-interactive key exchange protocols, even though non-interactivity makes such protocols especially vulnerable, since the public information cannot evolve from one execution to another.
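The one-way update at the heart of forward security can be sketched in a few lines: a hash chain over the period keys, so that compromising the current key reveals nothing about past ones. This is only a minimal illustration of the update mechanism, not the paper's multilinear-map construction:

```python
import hashlib

# Minimal sketch of the forward-security idea: the secret key is updated by a
# one-way function each period, so exposing the key of period t does not
# reveal the keys of periods 0..t-1 (inverting SHA-256 would be required).
def update(sk: bytes) -> bytes:
    return hashlib.sha256(b"update" + sk).digest()

sk = b"\x01" * 32          # period-0 secret key (placeholder value)
keys = [sk]
for _ in range(3):         # advance through periods 1..3
    sk = update(sk)
    keys.append(sk)

assert len(set(keys)) == 4  # one distinct key per period
```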

In this paper we present a forward-secure non-interactive key exchange scheme with sub-linear complexity in the number of time periods. Our protocol is described using generic leveled multilinear maps, but we show that it is compatible with the recently introduced candidates for such maps. We also discuss various security models for this primitive and prove that our scheme fulfills them, under standard assumptions.

David Pointcheval, Olivier Sanders
Secure Key Exchange and Sessions without Credentials

Secure communication is a fundamental cryptographic primitive. Typically, security is achieved by relying on an existing credential infrastructure, such as a PKI or passwords, for identifying the end points to each other. But what can be obtained when no such credential infrastructure is available?

Clearly, when there is no pre-existing credential infrastructure, an adversary can mount successful “man in the middle” (MIM) attacks by modifying the communication between the legitimate endpoints. Still, we show that not all is lost, as long as the adversary’s control over the communication is not complete: We present relatively efficient key exchange and secure session protocols that guarantee that any MIM adversary is immediately detected as soon as he fails to intercept even a single message between the legitimate endpoints.

To obtain this guarantee we strengthen the notion of key exchange to require that the keys exchanged in any two sessions are independent of each other as long as each session has at least one honest endpoint, even if both sessions have an adversarial endpoint. We call this notion credential-free key exchange (CFKE). We then strengthen the existing notion of secure session protocols to provide the above guarantee given a CFKE (existing definitions and constructions are insufficient for this purpose). We provide two alternative definitions and constructions of CFKE: a game-based one with a (very efficient) construction in the RO model, and a UC one with a construction in the CRS model.

Ran Canetti, Vladimir Kolesnikov, Charles Rackoff, Yevgeniy Vahlis

Multilinear Maps and Obfuscation

Relaxed Two-to-One Recoding Schemes

A two-to-one recoding (TOR) scheme is a new cryptographic primitive, proposed in the recent work of Gorbunov, Vaikuntanathan, and Wee (GVW), as a means to construct attribute-based encryption (ABE) schemes for all boolean circuits. GVW show that TOR schemes can be constructed assuming the hardness of the learning-with-errors (LWE) problem.

We propose a slightly weaker variant of TOR schemes called correlation-relaxed two-to-one recoding (CR-TOR). Unlike TOR schemes, our weaker variant does not require the encoding function to be pseudorandom on correlated inputs. We instead replace this requirement with an indistinguishability property stating that a ciphertext is hard to decrypt without access to a certain encoding. The primary benefit of this relaxation is that it allows the construction of ABE for circuits in the TOR paradigm from a broader class of cryptographic assumptions.

We show how to construct a CR-TOR scheme from the noisy cryptographic multilinear maps of Garg, Gentry, and Halevi as well as those of Coron, Lepoint, and Tibouchi. Our framework leads to an instantiation of ABE for circuits that is conceptually different from the existing constructions.

Omkant Pandey, Kim Ramchen, Brent Waters
Obfuscation ⇒ (IND-CPA Security $\not\Rightarrow$ Circular Security)

Circular security is an important notion for public-key encryption schemes and is needed by several cryptographic protocols. In circular security the adversary is given an extra “hint” consisting of a cycle of encryptions of secret keys, i.e., $\left(E_{pk_1}(sk_2),\ldots, E_{pk_n}(sk_1)\right)$. A natural question is whether every IND-CPA encryption scheme is also circular secure. It is trivial to see that this is not the case when n = 1. In 2010 a separation for n = 2 was shown by [ABBC10,GH10] under standard assumptions in bilinear groups.

In this paper we finally settle the question, showing that for every n there exists an IND-CPA secure scheme which is not n-circular secure.

Our result relies on the recent progress in cryptographic obfuscation.
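To see why the case n = 1 is trivial, note that any scheme can be modified so that encrypting its own secret key leaks it outright, while other messages are handled normally. A toy sketch, assuming a discrete-log-style key pair pk = g^sk, so that "m is the secret key for pk" is publicly checkable by the encryptor; the "ct" branch is a placeholder, not real encryption:

```python
# Toy illustration (not the paper's construction) of a scheme that is
# trivially NOT 1-circular secure: encryption checks whether the message is
# the secret key matching pk and, if so, leaks it in the clear.
P = 2**127 - 1   # a Mersenne prime, used as a small demo modulus
G = 2

def keygen(sk: int):
    return sk, pow(G, sk, P)

def enc(pk: int, m: int):
    if pow(G, m, P) == pk:          # m is the secret key for pk: leak it
        return ("leak", m)
    return ("ct", (m * pk) % P)     # placeholder "ciphertext" branch

sk, pk = keygen(123456789)
assert enc(pk, sk) == ("leak", sk)  # E_pk(sk) reveals sk outright
```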

Antonio Marcedone, Claudio Orlandi

Invited Talk I

Program Obfuscation via Multilinear Maps

Recent proposals for plausible candidate constructions of multilinear maps and obfuscation have radically transformed what we imagined to be possible in cryptography. For over a decade cryptographers had been very skeptical about the existence of such objects. In this article, we provide a very brief introduction to these results and some of their interesting consequences.

Sanjam Garg

Pseudorandom Function Extensions

Constrained Verifiable Random Functions

We extend the notion of verifiable random functions (VRF) to constrained VRFs, which generalize the concept of constrained pseudorandom functions, put forward by Boneh and Waters (Asiacrypt’13), and independently by Kiayias et al. (CCS’13) and Boyle et al. (PKC’14), who call them delegatable PRFs and functional PRFs, respectively. In a standard VRF the secret key sk allows one to evaluate a pseudorandom function at any point of its domain; in addition, it enables computation of a non-interactive proof that the function value was computed correctly. In a constrained VRF, from the key sk one can derive constrained keys sk_S for subsets S of the domain, which allow computation of function values and proofs only at points in S.

After formally defining constrained VRFs, we derive instantiations from the multilinear-maps-based constrained PRFs by Boneh and Waters, yielding a VRF with constrained keys for any set that can be decided by a polynomial-size circuit. Our VRFs have the same function values as the Boneh-Waters PRFs and are proved secure under the same hardness assumption, showing that verifiability comes at no cost. Constrained (functional) VRFs were stated as an open problem by Boyle et al.
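For intuition, constrained keys are already familiar from PRFs: in the classic GGM tree construction, the subkey at an internal node acts as a constrained key for all inputs sharing that prefix. A sketch of this standard prefix-constrained PRF, shown only to illustrate constrained evaluation — the paper's constrained VRFs are built from multilinear maps and additionally provide verifiability proofs:

```python
import hashlib

# GGM-style prefix-constrained PRF sketch (a standard construction; SHA-256
# stands in for the length-doubling PRG).
def G(seed: bytes, bit: int) -> bytes:
    return hashlib.sha256(seed + bytes([bit])).digest()

def evaluate(key: bytes, x: str) -> bytes:
    """Walk the GGM tree along the bits of x (a bit-string like "0110")."""
    for b in x:
        key = G(key, int(b))
    return key

def constrain(msk: bytes, prefix: str) -> bytes:
    """Constrained key for the set S of all inputs starting with `prefix`."""
    return evaluate(msk, prefix)

def constrained_eval(ckey: bytes, prefix: str, x: str) -> bytes:
    assert x.startswith(prefix)          # only points in S are evaluable
    return evaluate(ckey, x[len(prefix):])

msk = b"\x00" * 32
ck = constrain(msk, "01")
assert constrained_eval(ck, "01", "0110") == evaluate(msk, "0110")
```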

Georg Fuchsbauer
Publicly Evaluable Pseudorandom Functions and Their Applications

We put forth the notion of publicly evaluable pseudorandom functions (PEPRFs), which is a non-trivial extension of standard pseudorandom functions (PRFs). Briefly, PEPRFs are defined over a domain X containing an NP language L in which the witness is hard to extract on average, and each secret key sk is associated with a public key pk. For any x ∈ L, in addition to evaluating F_sk(x) using sk as in standard PRFs, one is also able to evaluate F_sk(x) given pk, x, and a witness w for x ∈ L. We conduct a formal study of PEPRFs, focusing on applications, constructions, and extensions. In more detail:

We show how to construct a public-key encryption (PKE) scheme from PEPRFs. The construction is simple, black-box, and admits a direct proof of security. We provide evidence that PEPRFs exist by showing generic constructions from both hash proof systems and extractable hash proof systems.

We introduce the notion of publicly samplable PRFs (PSPRFs), a relaxation of PEPRFs that nonetheless implies PKE. We show that PSPRFs are implied by trapdoor relations, which are in turn implied by trapdoor functions. This helps us to unify and clarify many PKE schemes from different paradigms and general assumptions under the notion of PSPRFs.

We propose two variants of PEPRFs. One is publicly evaluable predicate PRFs, which admit a direct construction of predicate encryption. The other is publicly evaluable and verifiable functions (PEVFs), which admit a simple construction of “hash-and-sign” signatures.

Yu Chen, Zongyang Zhang

Secure Computation – Foundations and Algorithms

On the Classification of Finite Boolean Functions up to Fairness

Two parties, P_1 and P_2, wish to jointly compute some function f(x, y), where P_1 only knows x, whereas P_2 only knows y. Furthermore, and most importantly, the parties wish to reveal only what the output suggests. Function f is said to be computable with complete fairness if there exists a protocol computing f such that whenever one of the parties obtains the correct output, then both of them do. The only protocol known to compute functions with complete fairness is that of Gordon et al. (STOC 2008). The functions in question are finite, Boolean, and the output is shared by both parties. The classification of such functions up to fairness may be a first step towards the classification of all functionalities up to fairness. Recently, Asharov (TCC 2014) identified two families of functions that are computable with fairness using the protocol of Gordon et al. and another family for which the protocol (potentially) falls short. Surprisingly, these families account for almost all finite Boolean functions. In this paper, we expand our understanding of what can be computed fairly with the protocol of Gordon et al. In particular, we fully describe which functions the protocol computes fairly and which it (potentially) does not. Furthermore, we present a new class of functions for which fair computation is outright impossible. Finally, we confirm and expand Asharov’s observation regarding the fairness of finite Boolean functions: almost all functions f : X × Y → {0,1} for which |X| ≠ |Y| are fair, whereas almost all functions for which |X| = |Y| are not.

Nikolaos Makriyannis
Communication-Efficient MPC for General Adversary Structures

A multiparty computation (MPC) protocol allows a set of players to compute a function of their inputs while keeping the inputs private and at the same time ensuring the correctness of the output. Most MPC protocols assume that the adversary can corrupt up to a fixed fraction of the players. Hirt and Maurer initiated the study of MPC under more general corruption patterns, in which the adversary is allowed to corrupt any set of players in some pre-defined collection of sets [1]. In this paper we pursue this important direction and present MPC protocols with improved communication complexity for general adversary structures. More specifically, ours is the first unconditionally secure protocol in the malicious setting that achieves communication linear in the size of the monotone span program representing the adversary structure, for any Q2 adversary structure, whereas all previous protocols were at least cubic.

Joshua Lampkins, Rafail Ostrovsky
Publicly Auditable Secure Multi-Party Computation

In the last few years the efficiency of secure multi-party computation (MPC) has increased by several orders of magnitude. However, this alone might not be enough if we want MPC protocols to be used in practice. A crucial property that is needed in many applications is that everyone can check that a given (secure) computation was performed correctly – even in the extreme case where all the parties involved in the computation are corrupted, and even if the party who wants to verify the result was not participating. This is especially relevant in the clients-servers setting, where many clients provide input to a secure computation performed by a few servers. An obvious example of this is electronic voting, but also in many types of auctions one may want independent verification of the result. Traditionally, this is achieved by using non-interactive zero-knowledge proofs during the computation.

A recent trend in MPC protocols is to have a more expensive preprocessing phase followed by a very efficient online phase, e.g., the recent so-called SPDZ protocol by Damgård et al. Applications such as voting and some auctions are perfect use cases for these protocols, as the parties usually know well in advance when the computation will take place, and using those protocols allows us to use only cheap information-theoretic primitives in the actual computation. Unfortunately, no protocol of the SPDZ type supports an audit phase.

In this paper, we show how to achieve efficient MPC with a public audit. We formalize the concept of publicly auditable secure computation and provide an enhanced version of the SPDZ protocol where, even if all the servers are corrupted, anyone with access to the transcript of the protocol can check that the output is indeed correct. Most importantly, we do so without significantly compromising the performance of SPDZ: our online phase has complexity approximately twice that of SPDZ.

Carsten Baum, Ivan Damgård, Claudio Orlandi
Reducing the Overhead of MPC over a Large Population

We present a secure honest-majority MPC protocol, against a static adversary, which aims to reduce the communication cost in the situation where there is a large number of parties and the number of adversarially controlled parties is relatively small. Our goal is to reduce the usage of point-to-point channels among the parties, thus enabling them to run multiple different protocol executions. Our protocol has highly efficient theoretical communication cost when compared with other protocols in the literature; specifically, the circuit-dependent communication cost, for circuits of suitably large depth, is $\mathcal{O}(|ckt|\kappa^7)$, for security parameter κ and circuit size |ckt|. Our protocol finds application in cloud computing scenarios, where the fraction of corrupted parties is relatively small. By minimizing the usage of point-to-point channels, our protocol can enable a cloud service provider to run multiple MPC protocols.

Ashish Choudhury, Arpita Patra, Nigel P. Smart

Network Security

Statistics on Password Re-use and Adaptive Strength for Financial Accounts

Multiple studies have demonstrated that users select weak passwords. However, the vast majority of studies on password security use password lists that only contain passwords for a single site, which means that several important questions cannot be studied. For example, how much stronger are password choices for different categories of sites? We use a dataset extracted from a large dump of malware records. It contains multiple accounts (and passwords) per user and thus allows us to study both password re-use and the correlation between the value of an account and the strength of the passwords for those accounts.

Our first contribution shows that users in our sample choose (substantially) stronger passwords for financial accounts than for low-value accounts, based on the extracted passwords as well as on publicly available lists. This has implications for password research, as some widely-used lists contain passwords much weaker than those used in the real world (for accounts of more than low value). As our second contribution, we measure password re-use while taking account value into account. We see that although high-value passwords are stronger, they are re-used more frequently than low-value passwords – a valuable password is identical to 21% of a user’s remaining passwords. Before our study, little was known about password re-use across different account values.

Daniel V. Bailey, Markus Dürmuth, Christof Paar
Efficient Network-Based Enforcement of Data Access Rights

Today, databases, especially those serving or connected to the Internet, need strong protection against data leakage stemming from misconfiguration as well as from attacks such as SQL injection. Insider and Advanced Persistent Threat (APT) attacks are also increasingly common threats in the security landscape.

We introduce an access control list (ACL)-based policy checking and enforcement system designed specifically to prevent unauthorized (malicious or accidental) exfiltration of database records from real-life large-scale systems. At the center of our approach is a trusted small-footprint and lightweight policy checker (e.g., implemented as a router function) that filters all outgoing traffic. We provably guarantee that only authorized data may be sent outside, and to the right recipients.

We design and formally prove the security of two access control schemes with distinct security and performance guarantees: one based on authenticated Bloom filters, and one based on either long or short (e.g., 16-bit) aggregated MAC codes. The use of the short codes, while providing a clear performance benefit, cannot be proven secure by a simple reduction to existing aggregated MAC tools, and required careful handling and a concrete security analysis. The advantage of our schemes is that they are simple yet much more efficient than naive MAC-based access control.

Our solution requires explicit designation of each record-attribute-user tuple as permitted or disallowed. We rely on shared secret key cryptography, and our system can scale even for use by large organizations.

We implemented and deployed our algorithms in an industrial system setup. Our tests mimic usage scenarios of medium-size DB (10M records) of telephone company call records. Our experiments show that we achieve high (scalable) efficiency both in the server and checker computation, as well as extremely low communication overhead.

Paul Giura, Vladimir Kolesnikov, Aris Tentes, Yevgeniy Vahlis
EyeDecrypt — Private Interactions in Plain Sight

We introduce EyeDecrypt, a novel technology for privacy-preserving human-computer interaction. EyeDecrypt allows only authorized users to decipher data shown on a display, such as an electronic screen or plain printed material; in the former case, the authorized user can then interact with the system (e.g., by pressing buttons on the screen) without revealing the details of the interaction to others who may be watching, or to the system itself.

The user views the decrypted data on a closely-held personal device, such as a pair of smart glasses with a camera and heads-up display, or a smartphone. The data is displayed as an image overlay on the personal device, which we assume cannot be viewed by the adversary. The overlay is a form of augmented reality that not only allows the user to view the protected data, but also to securely enter input into the system by randomizing the input interface.

EyeDecrypt consists of three main components: a visualizable encryption scheme; a dataglyph-based visual encoding scheme for the ciphertexts generated by the encryption scheme; and a randomized input and augmented reality scheme that protects user inputs without harming usability. We describe all aspects of EyeDecrypt, from security definitions, constructions and analysis, to implementation details of a prototype developed on a smartphone.

Andrea G. Forte, Juan A. Garay, Trevor Jim, Yevgeniy Vahlis

Functional Encryption

Semi-adaptive Attribute-Based Encryption and Improved Delegation for Boolean Formula

We consider semi-adaptive security for attribute-based encryption, where the adversary specifies the challenge attribute vector after it sees the public parameters but before it makes any secret key queries. We present two constructions of semi-adaptive attribute-based encryption under static assumptions with short ciphertexts. Previous constructions with short ciphertexts either achieve the weaker notion of selective security, or require parameterized assumptions.

As an application, we obtain improved delegation schemes for Boolean formulas with semi-adaptive soundness, where correctness of the computation is guaranteed even if the client’s input is chosen adaptively depending on its public key. Previous delegation schemes for formulas achieve only one of adaptive soundness, constant communication complexity, or security under static assumptions; we show how to achieve semi-adaptive soundness and the last two simultaneously.

Jie Chen, Hoeteck Wee
Expressive Attribute-Based Encryption with Constant-Size Ciphertexts from the Decisional Linear Assumption

We propose a key-policy attribute-based encryption (KP-ABE) scheme with constant-size ciphertexts, whose selective security is proven under the decisional linear (DLIN) assumption in the standard model. The proposed scheme also achieves semi-adaptive security, a recently proposed notion of security. The access structure is expressive, that is, it is given by non-monotone span programs. The scheme also has fast decryption, i.e., decryption includes only a constant number of pairing operations. As an application of our KP-ABE construction, we also propose a fully secure attribute-based signature scheme with constant-size secret (signing) keys from the DLIN assumption. To achieve the above results, we employ a hierarchical reduction technique on dual pairing vector spaces and a modified form of the pairwise independence lemma specific to our proposed schemes.

Katsuyuki Takashima

Invited Talk II

Functional Encryption and Its Impact on Cryptography

Functional encryption is a novel paradigm for public-key encryption that enables both fine-grained access control and selective computation on encrypted data, as is necessary to protect big, complex data in the cloud. In this article, we provide a brief introduction to functional encryption, and an overview of its overarching impact on the field of cryptography.

Hoeteck Wee

Cryptanalysis

Generic Attacks on Strengthened HMAC: n-bit Secure HMAC Requires Key in All Blocks

HMAC is the most widely used hash based MAC scheme. Recently, several generic attacks have been presented against HMAC with a complexity between 2

n

/2

and 2

n

, where

n

is the output size of an underlying hash function. In this paper, we investigate the security of strengthened HMAC in which the key is used to process underlying compression functions. With such a modification, the attacker is unable to precompute the property of the compression function offline, and thus previous generic attacks are prevented. In this paper, we show that keying the compression function in all blocks is necessary to prevent a generic internal state recovery attack with a complexity less than 2

n

. In other words, only with a single keyless compression function, the internal state is recovered faster than 2

n

. To validate the claim, we present a generic attack against the strengthened HMAC in which only one block is keyless, thus pre-computable offline. Our attack uses the previous generic attack by Naito

et al.

as a base. We improve it so that the attack can be applied only with a single keyless compression function while the attack complexity remains unchanged from the previous work.

Yu Sasaki, Lei Wang
Improved Indifferentiable Security Analysis of PHOTON

In this paper, we study the indifferentiable security of the domain extension algorithm of the PHOTON hash function, which was proven to be indifferentiable from a random oracle up to $\mathcal{O}(2^{\min\{ c/2, c^\prime/2 \}})$ query complexity, where $c$ is the capacity in the absorbing step of PHOTON and $c^\prime$ is that in the squeezing step. By reducing the size $c^\prime$, one can reduce the processing time spent by PHOTON, at the cost of degraded indifferentiable security. Note that there is no generic attack on PHOTON with $\mathcal{O}(2^{c^\prime/2})$ query complexity. Thus it is interesting to investigate the optimality of the indifferentiable security bound and the size of $c^\prime$ ensuring $\mathcal{O}(2^{c/2})$ security.

Motivated by this, we first prove that PHOTON is indifferentiable from a random oracle up to $\mathcal{O}(\min \{ q_{\mathsf{mcoll}} (d^\ast,c-c^\prime ), 2^{c/2} \})$ query complexity, where $q_{\mathsf{mcoll}}(d^\ast,c-c^\prime)$ is the query complexity to find a $d^\ast$-multi-collision on $(c-c^\prime)$ bits of the hash values and $d^\ast$ satisfies $q_{\mathsf{mcoll}} (d^\ast,c-c^\prime ) = 2^{c^\prime }/d^\ast$. We also show that there exists a generic attack on PHOTON with the same query complexity. Thus the indifferentiable security bound of our proof is optimal.

Second, by using this bound we study the parameter $c^\prime$ ensuring $\mathcal{O}(2^{c/2})$ security. We show that $\mathcal{O}(2^{c/2})$ security is ensured if $c^\prime \geq c/2 + \log_2 c$, which implies that we can reduce the processing time of PHOTON while keeping the same indifferentiable security.

Finally, we propose a construction faster than PHOTON with the same indifferentiable security, where the length of the first message block is modified from $r$ bits to $r + c/2$ bits.

Yusuke Naito, Kazuo Ohta

Secure Computation – Implementation

Faster Maliciously Secure Two-Party Computation Using the GPU

We present a new protocol for maliciously secure two-party computation based on cut-and-choose of garbled circuits, using the recent idea of “forge-and-lose”, which eliminates around a factor of 3 in the number of garbled circuits that need to be constructed and evaluated. Our protocol introduces a new way to realize the “forge-and-lose” approach, which avoids an auxiliary secure two-party computation protocol, does not rely on any number-theoretic assumptions, and parallelizes well in a single instruction, multiple data (SIMD) framework.

With this approach we prove our protocol universally composable-secure against a malicious adversary assuming access to oblivious transfer, commitment and coin-tossing functionalities in the random oracle model.

Finally, we construct, and benchmark, a SIMD implementation of this protocol using a GPU as a massive SIMD device. The findings compare favorably with all previous implementations of maliciously secure, two-party computation.

Tore Kasper Frederiksen, Thomas P. Jakobsen, Jesper Buus Nielsen
Systematizing Secure Computation for Research and Decision Support

We propose a framework for organizing and classifying research results in the active field of secure multiparty computation (MPC). Our

systematization of secure computation

consists of (1) a set of definitions circumscribing the MPC protocols to be considered; (2) a set of quantitative axes for classifying and comparing MPC protocols; and (3) a knowledge base of propositions specifying the known relations between axis values. We have classified a large number of MPC protocols on these axes and developed an interactive tool for exploring the problem space of secure computation. We also give examples of how this systematization can be put to use to foster new research and the adoption of MPC for real-world problems.

Jason Perry, Debayan Gupta, Joan Feigenbaum, Rebecca N. Wright
An Empirical Study and Some Improvements of the MiniMac Protocol for Secure Computation

Recent developments in multi-party computation (MPC) have resulted in very efficient protocols for dishonest majority in the preprocessing model. In particular, two very promising protocols for Boolean circuits have been proposed by Nielsen et al. (nicknamed TinyOT) and by Damgård and Zakarias (nicknamed MiniMac). While TinyOT has already been implemented, we present in this paper the first implementation of MiniMac, using the same platform as the existing TinyOT implementation. We also suggest several improvements to MiniMac, both at the protocol design and the implementation level. In particular, we suggest a modification of MiniMac that achieves increased parallelism at no extra communication cost. This gives an asymptotic improvement of the original protocol as well as an 8-fold speed-up of our implementation. We compare the resulting protocol to TinyOT for the case of secure parallel computation of a large number of AES encryptions and find that it performs better than results reported so far on TinyOT, on the same hardware.

Ivan Damgård, Rasmus Lauritsen, Tomas Toft

Zero Knowledge

Efficient NIZK Arguments via Parallel Verification of Benes Networks

We work within the recent paradigm, started by Groth (ASIACRYPT 2010), of constructing short non-interactive zero-knowledge arguments from a small number of basic arguments in a modular fashion. The main technical result of this paper is a new permutation argument, obtained by using the product and shift arguments of Lipmaa (2014) and a parallelizable variant of the Beneš network. We use it to design a short non-interactive zero-knowledge argument for the NP-complete language CircuitSAT with $\Theta(n \log^2 n)$ prover computational complexity, where $n$ is the size of the circuit. The permutation argument can be naturally used to design direct NIZK arguments for many other NP-complete languages.

Helger Lipmaa
Non-Malleable Zero Knowledge: Black-Box Constructions and Definitional Relationships

This paper deals with efficient non-malleable zero-knowledge proofs for $\mathcal{NP}$, based on general assumptions. We give the first construction of a simulation-sound zero-knowledge ($\mathcal{ZK}$) protocol for $\mathcal{NP}$ based only on the black-box use of one-way functions. Constructing such a proof system has been an open question ever since the original work of Dolev, Dwork, and Naor [18]. In addition to this feasibility result, our protocol has a constant number of rounds, which is asymptotically optimal.

Traditionally, the term non-malleable zero-knowledge (NmZK) refers to the original definition of [18]; but today it is used loosely to also refer to simulation-soundness (SimSound) [51] and simulation-extractability (SimExt) [47]. While SimExt implies NmZK, the common perception is that SimExt is the strongest of the three notions. However, very few results about their exact relationship are known.

In the second part of this work, we provide further results about the exact relationship between these notions. We show that in the "static" case, if an NmZK protocol is also an argument-of-knowledge, then it is in fact SimExt. Furthermore, in the strictest sense of the definition, SimSound does not necessarily follow from SimExt. These results are somewhat surprising because they run counter to the common perception that SimExt is the strongest of the three notions.

Abhishek Jain, Omkant Pandey
On Adaptively Secure Protocols

Adaptive security captures the capability of an adversary to adaptively affect a system during the course of its computation, based on partial information gathered. In this work, we explore the theoretical complexity of achieving adaptive security in two settings:

1. Adaptive UC-Secure Computation: We provide a round-efficient compiler that transforms any stand-alone semi-honest adaptively secure multiparty computation protocol into an adaptively UC-secure one. Recently, Dana et al. (Asiacrypt 2013) showed how to achieve adaptive UC-security in any trusted setup under minimal assumptions. They achieve this by constructing an O(n)-round adaptively secure concurrent non-malleable commitment scheme. The main contribution of our work shows how to achieve the same in O(1) rounds.

2. Zero-Knowledge with Adaptive Inputs: Lin and Pass (TCC 2011) gave the first constructions of concurrent non-malleable zero-knowledge proofs secure w.r.t. adaptively chosen inputs in the plain model in a restricted setting, namely, where the adversary can only ask for proofs of true (adaptively chosen) statements. We extend their definition to the fully adaptive setting and show how to construct a protocol that satisfies this definition. As an independent contribution, we provide a simple and direct compilation of any semi-honest secure protocol into a fully concurrently secure protocol under polynomial-time assumptions in the Angel-based UC-security framework.

Muthuramakrishnan Venkitasubramaniam

Message Authentication

Key-Indistinguishable Message Authentication Codes

While standard message authentication codes (MACs) guarantee authenticity of messages, they do not, in general, guarantee the anonymity of the sender and the recipient. For example, it may be easy for an observer to determine whether or not two authenticated messages were sent by the same party, even without any information about the secret key used. However, preserving any uncertainty an attacker may have about the identities of honest parties engaged in authenticated communication is an important goal of many cryptographic applications. For example, this is stated as an explicit goal of modern cellphone authentication protocols [rGPP12] and RFID-based authentication systems [Vau10].
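This linkability issue is easy to see with any deterministic MAC; the following toy sketch (an illustration of the problem, not a construction from the paper) uses HMAC-SHA256 from the Python standard library:

```python
import hashlib
import hmac
import os

def tag(key: bytes, msg: bytes) -> bytes:
    """A standard deterministic MAC: same key and message -> same tag."""
    return hmac.new(key, msg, hashlib.sha256).digest()

key_a, key_b = os.urandom(32), os.urandom(32)
msg = b"status: ok"

# An eavesdropper who sees two (message, tag) pairs can tell whether
# they came from the same key holder, without knowing either key:
assert tag(key_a, msg) == tag(key_a, msg)   # linkable to one sender
assert tag(key_a, msg) != tag(key_b, msg)   # distinguishable senders
```

A key-indistinguishable MAC must avoid exactly this kind of leakage, e.g. by randomizing tags so that two tags reveal nothing about whether they were produced under the same key.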

In this work we introduce and construct a new fundamental cryptographic primitive called key-indistinguishable (KI) MACs. These can be used to realize many of the most important higher-level applications requiring some form of anonymity and authenticity [AHM+14a]. We show that much (though not all) of the modular MAC construction framework of [DKPW12] gives rise to several variants of KI MACs. On the one hand, we show that KI MACs can be built from hash proof systems and certain weak PRFs, allowing us to base security on assumptions such as DDH, CDH and LWE. Next we show that the two direct constructions from the LPN assumption of [DKPW12] are KI, resulting in particularly efficient constructions based on structured assumptions. On the other hand, we also give a very simple and efficient construction based on a PRF, which allows us to base KI MACs on ideal primitives such as an ideal compression function (using HMAC) or a block cipher (using, say, CBC-MAC). In particular, by using our PRF construction, many real-world implementations of MACs can be easily and cheaply modified to obtain a KI MAC. Finally we show that the transformations of [DKPW12] for increasing the domain size of a MAC, as well as for strengthening the type of unforgeability it provides, also preserve (or even strengthen) the type of KI enjoyed by the MAC. Altogether these results provide a wide range of assumptions and construction paths for building various flavors of this new primitive.

Joël Alwen, Martin Hirt, Ueli Maurer, Arpita Patra, Pavel Raykov
Interactive Encryption and Message Authentication

Public-Key Encryption (PKE) and Message Authentication (PKMA, a.k.a. digital signatures) are fundamental cryptographic primitives. Traditionally, both notions are defined as non-interactive (i.e., single-message). In this work, we initiate a rigorous study of (possibly) interactive PKE and PKMA schemes. We obtain the following results demonstrating the power of interaction to resolve questions which are either open or impossible in the non-interactive setting.

Efficiency/Assumptions.

One of the best-known open questions in the area of PKE is to build, in a "black-box" way, so-called chosen-ciphertext attack (CCA-) secure PKE from chosen-plaintext attack (CPA-) secure PKE. In contrast, we show a simple 2-round CCA-secure PKE from any (non-interactive) CPA-secure PKE (in fact, these primitives turn out to be equivalent). Similarly, although non-interactive PKMA schemes can be inefficiently built from any one-way function, no efficient signature schemes are known from many popular number-theoretic assumptions, such as factoring, CDH or DDH. In contrast, we show an efficient 2-round PKMA from most popular assumptions, including factoring, CDH and DDH.

Advanced Properties.

It is well known that no non-interactive signature (resp. encryption) scheme can be deniable (resp. forward-secure), since the signature (resp. ciphertext) can later "serve as an evidence of the sender's consent" (resp. "be decrypted if the receiver's key is compromised"). We also formalize a related notion of replay-secure (necessarily) interactive PKMA (resp. PKE) schemes, where the verifier (resp. encryptor) is assured that the "current" message can only be authenticated (resp. decrypted) by the secret key owner now, as opposed to some time in the past (resp. future). We observe that our 2-round PKMA scheme is both replay-secure and (passively) deniable, and our 2-round PKE scheme is both replay- and forward-secure.

Yevgeniy Dodis, Dario Fiore

Invited Talk III

Homomorphic Signatures and Message Authentication Codes

Homomorphic message authenticators allow one to validate computations on previously signed data. The holder of a dataset {m_1, …, m_n} uses her secret key sk to produce corresponding tags (σ_1, …, σ_n) and stores the authenticated dataset on a remote server. Later the server can (publicly) compute m = f(m_1, …, m_n) together with a succinct tag σ certifying that m is the correct output of the computation f. A nice feature of homomorphic authenticators is that the validity of this tag can be verified without having to know the original dataset. This latter property makes the primitive attractive in a variety of contexts and applications, including, for instance, verifiable delegation of computation on outsourced data.
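For intuition, this workflow can be sketched for the special case of linear functions (here, f = sum). The following is an illustrative toy scheme with made-up parameters, not one of the constructions surveyed in the talk:

```python
import hashlib
import os

P = 2**127 - 1  # toy prime modulus

def prf(seed: bytes, i: int) -> int:
    """Pseudorandom field element for dataset position i, derived from the key."""
    return int.from_bytes(hashlib.sha256(seed + i.to_bytes(8, "big")).digest(), "big") % P

def keygen():
    seed = os.urandom(16)
    k = int.from_bytes(os.urandom(16), "big") % P
    return (seed, k)

def tag(sk, i: int, m_i: int) -> int:
    """sigma_i = k * m_i + prf(seed, i)  (mod P), one tag per data item."""
    seed, k = sk
    return (k * m_i + prf(seed, i)) % P

def eval_sum(tags):
    """The server combines tags publicly, without the key: a tag for sum(m_i)."""
    return sum(tags) % P

def verify(sk, indices, claimed, sigma) -> bool:
    """Check the claimed output using only sk and the positions, not the dataset."""
    seed, k = sk
    return sigma == (k * claimed + sum(prf(seed, i) for i in indices)) % P

sk = keygen()
data = [7, 11, 13]
tags = [tag(sk, i, m) for i, m in enumerate(data)]
sigma = eval_sum(tags)
assert verify(sk, range(len(data)), 31, sigma)       # correct sum accepted
assert not verify(sk, range(len(data)), 30, sigma)   # wrong sum rejected
```

The verifier recomputes only the key-dependent part, so verification never touches the stored messages themselves, mirroring the succinct-tag property described above.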

In this short survey, I will give an overview of the state of the art in the areas of homomorphic signatures and message authentication codes. I will (briefly) describe some of the most recent results and provide an overview of the main challenges that remain to be addressed.

Dario Catalano

Proofs of Space and Erasure

Efficient Proofs of Secure Erasure

A proof of secure erasure (PoSE) enables a space-restricted prover to convince a verifier that he has erased his memory of size S. So far the only known PoSEs have communication complexity linear in S or computation complexity quadratic in S; hence their applicability is limited, since Θ(S) communication or Θ(S²) computation can be quite impractical (e.g., for devices with S memory words when S is on the order of gigabytes). In this work we put forth two new PoSEs that for the first time achieve sublinear communication and quasilinear computation complexity; hence they are more efficient than what was previously known. This efficiency comes at the price of slightly more relaxed security guarantees, which we describe and motivate.
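To see where the Θ(S) communication cost arises, consider the folklore baseline protocol in which the verifier fills the prover's entire memory with fresh randomness and asks for a digest of it (a sketch of that simple baseline, not of the new protocols in this paper):

```python
import hashlib
import os

def verifier_challenge(S: int) -> bytes:
    """Send S random bytes -- this transfer is what costs Theta(S) communication."""
    return os.urandom(S)

def prover_respond(memory: bytes) -> bytes:
    """A prover with only S bytes of storage must overwrite all of it with the
    challenge to answer correctly, which in particular erases prior contents."""
    return hashlib.sha256(memory).digest()

def verifier_check(challenge: bytes, response: bytes) -> bool:
    return response == hashlib.sha256(challenge).digest()

S = 1 << 20  # 1 MiB of prover memory in this toy run
c = verifier_challenge(S)
assert verifier_check(c, prover_respond(c))
```

Because the verifier must ship fresh randomness filling the whole memory, the communication grows with S, which is exactly the cost the sublinear protocols in this paper aim to avoid.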

Nikolaos P. Karvelas, Aggelos Kiayias
Proofs of Space: When Space Is of the Essence

Proofs of computational effort were devised to control denial of service attacks. Dwork and Naor (CRYPTO ’92), for example, proposed to use such proofs to discourage spam. The idea is to couple each email message with a proof of work that demonstrates the sender performed some computational task. A proof of work can be either CPU-bound or memory-bound. In a CPU-bound proof, the prover must compute a CPU-intensive function that is easy to check by the verifier. A memory-bound proof, instead, forces the prover to access the main memory several times, effectively replacing CPU cycles with memory accesses.
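A CPU-bound proof of work in the hashcash style can be sketched as follows (an illustrative example of the general idea, not a scheme from this paper): the prover searches for a nonce whose hash together with the message falls below a target, while verification costs a single hash.

```python
import hashlib
from itertools import count

def solve_pow(message: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce; expected work is about 2**difficulty_bits hashes."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def check_pow(message: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Checking costs one hash, regardless of how hard the proof was to find."""
    digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

msg = b"mail from alice to bob"
nonce = solve_pow(msg, 12)  # ~4096 hashes on average: cheap per email, costly for bulk spam
assert check_pow(msg, nonce, 12)
```

The asymmetry between solving and checking is what makes such proofs useful against denial of service: the sender pays, the receiver verifies almost for free.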

In this paper we put forward a new concept dubbed proof of space. To compute such a proof, the prover must use a specified amount of space; i.e., we are not interested in the number of accesses to main memory (as in memory-bound proofs of work) but rather in the amount of actual memory the prover must employ to compute the proof. We give a complete and detailed algorithmic description of our model. We develop a full theoretical analysis using combinatorial tools from complexity theory (such as pebbling games), which are essential in studying space lower bounds.

Giuseppe Ateniese, Ilario Bonacina, Antonio Faonio, Nicola Galesi

Public-Key Encryption

Chosen Ciphertext Security on Hard Membership Decision Groups: The Case of Semi-smooth Subgroups of Quadratic Residues

Nowadays, chosen-ciphertext (CCA) security is considered the de facto standard security notion for public-key encryption (PKE). CCA-secure PKE schemes are often constructed on efficiently recognizable groups, i.e., groups where the corresponding membership decision problem is easy. On the other hand, when we prove the CCA security of PKE schemes on groups that are not efficiently recognizable, much care is required. For example, even if a decryption query involves an unexpected element outside the group which causes a problem, the challenger cannot detect it, due to the hardness of the membership decision problem for the group. However, such a possibility is often overlooked.

As an example of such a group, in this paper, we consider the semi-smooth subgroup, which was proposed by Groth (TCC 2005) for enhancing the efficiency of factoring-based cryptographic primitives. Specifically, we propose a general technique to guarantee the CCA security of PKE schemes on the semi-smooth subgroup. Roughly speaking, we prove that for almost all natural "verification equations," it is impossible to generate a query which does not consist of elements in the group yet satisfies the equation, if the factoring problem is hard. Hence, queries whose components are not in the group will be automatically rejected even though the simulator cannot recognize whether these components are in the group or not. By the same technique, we also prove that the strong Diffie-Hellman assumption holds on the "signed" semi-smooth subgroup under the factoring assumption, and improve the efficiency of a factoring-based non-interactive key exchange scheme by instantiating it on the semi-smooth subgroup.

Takashi Yamakawa, Shota Yamada, Koji Nuida, Goichiro Hanaoka, Noboru Kunihiro
On Selective-Opening Attacks against Encryption Schemes

At FOCS '99, Dwork et al. put forth the notion of "selective-opening attacks" (SOAs, for short). In the literature, security against such attacks has been formalized via indistinguishability-based and simulation-based notions, respectively called IND-SO-CPA security and SIM-SO-CPA security. Furthermore, the IND-SO-CPA notion has been studied under two flavors: weak-IND-SO-CPA and full-IND-SO-CPA security. At Eurocrypt '09, Bellare et al. showed the first positive results on the SOA security of encryption schemes: 1) any lossy encryption scheme is weak-IND-SO-CPA secure; 2) any lossy encryption scheme with efficient openability is SIM-SO-CPA secure.

Despite rich further work on SOA security, the (un)feasibility of full-IND-SO-CPA security remains a major open problem in the area. The elusive nature of the full-IND-SO-CPA notion is attributed to a specific aspect of the security game, namely, that the challenger is required to perform a super-polynomial-time task. Not only do we not know whether there exists a scheme that is full-IND-SO-CPA secure, but we also do not know of concrete attacks against popular schemes such as the ElGamal and Cramer-Shoup schemes in the full-IND-SO-CPA model.

The contribution of our work is three-fold.

1. Motivated by the difficulty in understanding the (un)feasibility of the full-IND-SO-CPA notion, we study a variant of this notion that is closer in spirit to the IND-CPA notion but still embodies the security captured by the full-IND-SO-CPA notion. We observe that the weak form of our variation does not introduce any significant change to the weak-IND-SO-CPA notion; that is, the weak form of our notion is equivalent to the weak-IND-SO-CPA notion.

2. Interestingly, we can show that a large class of encryption schemes can be proven insecure for the full form of our notion. This class includes most known constructions of weak-IND-SO-CPA secure and SIM-SO-CPA secure schemes, as well as popular schemes like the ElGamal and Cramer-Shoup schemes.

3. Our third contribution studies the complexity of SIM-SO-CPA security. Complementing the result of Bellare et al., we show that lossiness is not necessary to achieve SIM-SO-CPA security. More specifically, we present a SIM-SO-CPA secure scheme that is not a lossy encryption scheme (regardless of efficient openability). Since SIM-SO-CPA security implies weak-IND-SO-CPA security, it follows as a corollary that the converses of both implications proved by Bellare et al. do not hold. Furthermore, as a corollary of our techniques, on a slightly unrelated but useful note, we obtain that lossiness is not required to obtain non-committing encryption. Previously, at Eurocrypt '09, Fehr et al. showed a construction of a non-committing encryption scheme from trapdoor permutations, and this scheme was, as noted by the authors, possibly not lossy. Our scheme is the first construction of a non-committing encryption scheme that is provably not lossy.

Rafail Ostrovsky, Vanishree Rao, Ivan Visconti
Narrow Bandwidth Is Not Inherent in Reverse Public-Key Encryption

Reverse Public-Key Encryption (RPKE) is a mode of operation exploiting a weak form of key privacy to provide message privacy. In principle, RPKE offers a fallback mode if the underlying encryption scheme's message secrecy fails while a weak form of key privacy survives. To date, all published RPKE constructions suffer from low bandwidth, and low bandwidth seems naturally inherent to reverse encryption. We show how reverse encryption can, in connection with and as a novel application of anonymous broadcast encryption, achieve high bandwidth. We point out that by using traditional and reverse encryption simultaneously, a form of crypto-steganographic channel inside a cryptosystem can be provided.

David Naccache, Rainer Steinwandt, Adriana Suárez Corona, Moti Yung
Backmatter
Metadata
Title
Security and Cryptography for Networks
Editors
Michel Abdalla
Roberto De Prisco
Copyright Year
2014
Publisher
Springer International Publishing
Electronic ISBN
978-3-319-10879-7
Print ISBN
978-3-319-10878-0
DOI
https://doi.org/10.1007/978-3-319-10879-7
