
2014 | Book

Advances in Cryptology – ASIACRYPT 2014

20th International Conference on the Theory and Application of Cryptology and Information Security, Kaohsiung, Taiwan, R.O.C., December 7-11, 2014, Proceedings, Part II

Edited by: Palash Sarkar, Tetsu Iwata

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

The two-volume set LNCS 8873 and 8874 constitutes the refereed proceedings of the 20th International Conference on the Theory and Application of Cryptology and Information Security, ASIACRYPT 2014, held in Kaohsiung, Taiwan, in December 2014. The 55 revised full papers and two invited talks presented were carefully selected from 255 submissions. They are organized in topical sections on cryptology and coding theory; authenticated encryption; symmetric key cryptanalysis; side channel analysis; hyperelliptic curve cryptography; factoring and discrete log; cryptanalysis; signatures; zero knowledge; encryption schemes; outsourcing and delegation; obfuscation; homomorphic cryptography; secret sharing; block ciphers and passwords; black-box separation; composability; multi-party computation.

Table of Contents

Frontmatter

Encryption Schemes

Concise Multi-challenge CCA-Secure Encryption and Signatures with Almost Tight Security
Abstract
To gain strong confidence in the security of a public-key scheme, it is most desirable for the security proof to feature a tight reduction between the adversary and the algorithm solving the underlying hard problem. Recently, Chen and Wee (Crypto ’13) described the first Identity-Based Encryption scheme with almost tight security under a standard assumption. Here, “almost tight” means that the security reduction only loses a factor O(λ) (where λ is the security parameter) instead of a factor proportional to the number of adversarial queries. Chen and Wee also gave the shortest signatures whose security almost tightly relates to a simple assumption in the standard model. Also recently, Hofheinz and Jager (Crypto ’12) constructed the first CCA-secure public-key encryption scheme in the multi-user setting with tight security. These constructions are significantly less efficient in ciphertext and signature length (and thus in processing) than the earlier schemes with loose reductions in their proofs of security. Hofheinz and Jager’s scheme has a ciphertext of a few hundred group elements, and they left open the problem of finding truly efficient constructions. Likewise, Chen and Wee’s signature and IBE schemes are somewhat less efficient than previous constructions with loose reductions from the same assumptions. In this paper, we consider space-efficient schemes with security almost tightly related to standard assumptions. We construct an efficient CCA-secure public-key encryption scheme whose chosen-ciphertext security in the multi-challenge, multi-user setting almost tightly relates to the DLIN assumption (in the standard model). Quite remarkably, the ciphertext size decreases to 69 group elements under the DLIN assumption, whereas the best previous solution required about 400 group elements.
Our scheme is obtained by taking advantage of a new almost tightly secure signature scheme (in the standard model) which is based on the recent concise proofs of linear subspace membership in the quasi-adaptive non-interactive zero-knowledge setting (QA-NIZK) defined by Jutla and Roy (Asiacrypt ’13). Our signature scheme reduces the length of the previous such signatures (by Chen and Wee) by 37% under the Decision Linear assumption, by almost 50% under the K-LIN assumption, and it becomes only 3 group elements long under the Symmetric eXternal Diffie-Hellman assumption. Our signatures are obtained by carefully combining the proof technique of Chen and Wee and the above-mentioned QA-NIZK proofs.
Benoît Libert, Marc Joye, Moti Yung, Thomas Peters
Efficient Identity-Based Encryption over NTRU Lattices
Abstract
Efficient implementations of lattice-based cryptographic schemes have been limited to only the most basic primitives like encryption and digital signatures. The main reason for this limitation is that at the core of many advanced lattice primitives is a trapdoor sampling algorithm (Gentry, Peikert, Vaikuntanathan, STOC 2008) that produced outputs that were too long for practical applications. In this work, we show that using a particular distribution over NTRU lattices can make GPV-based schemes suitable for practice. More concretely, we present the first lattice-based IBE scheme with practical parameters – key and ciphertext sizes are between two and four kilobytes, and all encryption and decryption operations take approximately one millisecond on a moderately-powered laptop. As a by-product, we also obtain digital signature schemes which are shorter than the previously most-compact ones of Ducas, Durmus, Lepoint, and Lyubashevsky from Crypto 2013.
Léo Ducas, Vadim Lyubashevsky, Thomas Prest
Order-Preserving Encryption Secure Beyond One-Wayness
Abstract
Semantic security of individual plaintext bits given the corresponding ciphertext is a fundamental notion in modern cryptography. We initiate the study of this basic problem for Order-Preserving Encryption (OPE), asking “what plaintext information can be semantically hidden by OPE encryptions?” OPE has gained much attention in recent years due to its usefulness for secure databases, and has received a thorough formal treatment with innovative and useful security notions. However, all previous notions are one-way based, and tell us nothing about partial-plaintext indistinguishability (semantic security).
In this paper, we propose the first indistinguishability-based security notion for OPE, which can ensure secrecy of the lower bits of a plaintext (under what is essentially a random-ciphertext probing setting). We then justify the definition from both the theoretical-plausibility and practicality aspects. Finally, we propose a new scheme satisfying this security notion (the first one to do so). To be clear, we note that the earlier security notions, while innovative and surprising, nevertheless tell us nothing about the above partial-plaintext indistinguishability because they are limited to being one-way-based.
Isamu Teranishi, Moti Yung, Tal Malkin
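The order-preserving property at the heart of the abstract above can be illustrated with a toy construction: a random, strictly increasing encryption table. This is purely illustrative (it is not the paper's scheme and carries none of its security guarantees); the names `ope_keygen`, `ope_enc`, and `ope_dec` are our own.

```python
import secrets

def ope_keygen(domain: int, expansion: int = 16) -> list[int]:
    """Toy OPE key: a random strictly increasing table mapping
    plaintext m -> ciphertext table[m].  Illustration only."""
    table, c = [], 0
    for _ in range(domain):
        c += 1 + secrets.randbelow(expansion)  # random positive gap
        table.append(c)
    return table

def ope_enc(table: list[int], m: int) -> int:
    return table[m]

def ope_dec(table: list[int], c: int) -> int:
    # Linear scan for clarity; a real scheme would binary-search.
    return table.index(c)
```

Because the table is strictly increasing, `m1 < m2` implies `ope_enc(table, m1) < ope_enc(table, m2)`, which is exactly the property that makes range queries over encrypted databases possible while leaking order.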

Outsourcing and Delegation

Statistically-secure ORAM with $\tilde{O}(\log^2 n)$ Overhead
Abstract
We demonstrate a simple, statistically secure ORAM with computational overhead \(\tilde{O}(\log^2 n)\); previous ORAM protocols achieve only computational security (under computational assumptions) or require \(\tilde{\Omega}(\log^3 n)\) overhead. An additional benefit of our ORAM is its conceptual simplicity, which makes it easy to implement in both software and (commercially available) hardware.
Our construction is based on recent ORAM constructions due to Shi, Chan, Stefanov, and Li (Asiacrypt 2011) and Stefanov and Shi (ArXiv 2012), but with some crucial modifications in the algorithm that simplify the ORAM and enable our analysis. A central component of our analysis is reducing the analysis of our algorithm to a “supermarket” problem; of independent interest (and of importance to our analysis), we provide an upper bound on the rate of “upset” customers in the “supermarket” problem.
Kai-Min Chung, Zhenming Liu, Rafael Pass
Adaptive Security of Constrained PRFs
Abstract
Constrained pseudorandom functions have recently been introduced independently by Boneh and Waters (Asiacrypt ’13), Kiayias et al. (CCS ’13), and Boyle et al. (PKC ’14). In a standard pseudorandom function (PRF) a key k is used to evaluate the PRF on all inputs in the domain. Constrained PRFs additionally offer the functionality to delegate “constrained” keys k_S which allow one to evaluate the PRF only on a subset S of the domain.
The three above-mentioned papers all show that the classical GGM construction (J.ACM ’86) of a PRF from a pseudorandom generator (PRG) directly yields a constrained PRF where one can compute constrained keys to evaluate the PRF on all inputs with a given prefix. This constrained PRF has already found many interesting applications. Unfortunately, the existing security proofs only show selective security (by a reduction to the security of the underlying PRG). To achieve full security, one has to use complexity leveraging, which loses an exponential factor 2^N in security, where N is the input length.
The first contribution of this paper is a new reduction that only loses a quasipolynomial factor q^{log N}, where q is the number of adversarial queries. For this we develop a new proof technique which constructs a distinguisher by interleaving simple guessing steps and hybrid arguments a small number of times. This approach might be of interest also in other contexts where currently the only technique to achieve full security is complexity leveraging.
Our second contribution is concerned with another constrained PRF, due to Boneh and Waters, which allows for constrained keys for the more general class of bit-fixing functions. Their security proof also suffers from a 2^N loss, which we show is inherent. We construct a meta-reduction which shows that any “simple” reduction of full security from a non-interactive hardness assumption must incur an exponential security loss.
Georg Fuchsbauer, Momchil Konstantinov, Krzysztof Pietrzak, Vanishree Rao
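The GGM prefix-constrained PRF described in the abstract above can be sketched concretely: the PRF is a walk down a binary tree of PRG expansions, and the constrained key for a prefix is simply the tree node reached by that prefix. The sketch below models the length-doubling PRG with SHA-256, which is an illustrative stand-in, not a security claim.

```python
import hashlib

def prg(seed: bytes) -> tuple[bytes, bytes]:
    """Length-doubling PRG, modeled here with SHA-256 (illustrative)."""
    return (hashlib.sha256(seed + b"0").digest(),
            hashlib.sha256(seed + b"1").digest())

def ggm_prf(key: bytes, x: str) -> bytes:
    """Evaluate the GGM PRF on bit-string x by walking the PRG tree."""
    node = key
    for bit in x:
        left, right = prg(node)
        node = left if bit == "0" else right
    return node

def constrain(key: bytes, prefix: str) -> bytes:
    """Constrained key for all inputs starting with `prefix`:
    just the tree node reached by walking the prefix."""
    return ggm_prf(key, prefix)

def eval_constrained(ckey: bytes, prefix: str, x: str) -> bytes:
    """Evaluate using only the constrained key; requires a matching prefix."""
    assert x.startswith(prefix)
    return ggm_prf(ckey, x[len(prefix):])
```

The delegation property is immediate: walking the remaining bits from the prefix node yields the same leaf as walking all of x from the root, while the constrained key reveals nothing about leaves outside the prefix subtree (under PRG security).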

Obfuscation

Poly-Many Hardcore Bits for Any One-Way Function and a Framework for Differing-Inputs Obfuscation
Abstract
We show how to extract an arbitrary polynomial number of simultaneously hardcore bits from any one-way function. In the case the one-way function is injective or has polynomially-bounded pre-image size, we assume the existence of indistinguishability obfuscation (iO). In the general case, we assume the existence of differing-input obfuscation (diO), but of a form weaker than full auxiliary-input diO. Our construction for injective one-way functions extends to extract hardcore bits on multiple, correlated inputs, yielding new D-PKE schemes. Of independent interest is a definitional framework for differing-inputs obfuscation in which security is parameterized by circuit-sampler classes.
Mihir Bellare, Igors Stepanovs, Stefano Tessaro
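As background for the “hardcore bits” discussed above, the classic Goldreich-Levin predicate gives a single hardcore bit for any one-way function (the paper's contribution is extracting polynomially many). A minimal sketch of the predicate, the inner product over GF(2):

```python
import secrets

def gl_hardcore_bit(x: bytes, r: bytes) -> int:
    """Goldreich-Levin predicate: inner product <x, r> over GF(2)."""
    assert len(x) == len(r)
    acc = 0
    for xb, rb in zip(x, r):
        acc ^= bin(xb & rb).count("1") & 1  # parity of the AND of each byte
    return acc

# For a one-way function f, given (f(x), r) for random r,
# the bit <x, r> is computationally unpredictable:
x = secrets.token_bytes(16)
r = secrets.token_bytes(16)
bit = gl_hardcore_bit(x, r)
```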
Using Indistinguishability Obfuscation via UCEs
Abstract
We provide the first standard model construction for a powerful class of Universal Computational Extractors (UCEs; Bellare et al. Crypto 2013) based on indistinguishability obfuscation. Our construction suffices to instantiate q-query correlation-secure hash functions and to extract polynomially many hardcore bits from any one-way function.
For many cryptographic primitives, and in particular for correlation-secure hash functions, all known constructions are in the random-oracle model. Indeed, recent negative results by Wichs (ITCS 2013) rule out a large class of techniques to prove the security of correlation-secure hash functions in the standard model. Our construction is based on puncturable PRFs (Sahai and Waters; STOC 2014) and indistinguishability obfuscation. However, our proof also relies on point obfuscation under auxiliary inputs (AIPO). This is crucial in light of Wichs’ impossibility result. Namely, Wichs proves that it is often hard to reduce two-stage games (such as UCEs) to a “one-stage assumption” such as DDH. In contrast, AIPOs and their underlying assumptions are inherently two-stage and, thus, allow us to circumvent Wichs’ impossibility result.
Our positive result is also noteworthy insofar as Brzuska, Farshim and Mittelbach (Crypto 2014) have shown recently, that iO and some variants of UCEs are mutually exclusive. Our results, hence, validate some of the new UCE notions that emerged as a response to the iO-attack.
Christina Brzuska, Arno Mittelbach
Indistinguishability Obfuscation versus Multi-bit Point Obfuscation with Auxiliary Input
Abstract
In a recent celebrated breakthrough, Garg et al. (FOCS 2013) gave the first candidate for so-called indistinguishability obfuscation (iO), thereby reviving interest in general-purpose obfuscation. Since then, iO has been used to advance numerous sub-areas of cryptography. While indistinguishability obfuscation is a general-purpose obfuscation scheme, several obfuscators for specific functionalities have been considered. In particular, special attention has been given to the obfuscation of so-called point functions, which return zero everywhere except for a single point x. A strong variant is point obfuscation with auxiliary input (AIPO), which allows an adversary to learn some non-trivial auxiliary information about the obfuscated point x (Goldwasser, Tauman-Kalai; FOCS 2005).
Multi-bit point functions are a strengthening of point functions, where on x, the point function returns a string m instead of 1. Multi-bit point functions with auxiliary input (MB-AIPO) have been constructed from composable AIPO by Canetti and Dakdouk (Eurocrypt 2008) and have been used by Matsuda and Hanaoka (TCC 2014) to construct CCA-secure public-key encryption schemes and by Bitansky and Paneth (TCC 2012) to construct three-round weak zero-knowledge protocols for NP.
In this paper we present both positive and negative results. We show that if indistinguishability obfuscation exists, then MB-AIPO does not. Towards this goal, we build on techniques by Brzuska, Farshim and Mittelbach (Crypto 2014) who use indistinguishability obfuscation as a means to attack a large class of assumptions from the Universal Computational Extractor framework (Bellare, Hoang and Keelveedhi; Crypto 2013). On the positive side we introduce a weak version of MB-AIPO which we deem to be outside the reach of our impossibility result. We build this weak version of MB-AIPO based on iO and AIPO and prove that it suffices to construct a public-key encryption scheme that is secure even if the adversary can learn an arbitrary leakage function of the secret key, as long as the secret key remains computationally hidden. Thereby, we strengthen a result by Canetti et al. (TCC 2010) that showed a similar connection in the symmetric-key setting.
Christina Brzuska, Arno Mittelbach
Bootstrapping Obfuscators via Fast Pseudorandom Functions
Abstract
We show that it is possible to upgrade an obfuscator for a weak complexity class WEAK into an obfuscator for arbitrary polynomial-size circuits, assuming that the class WEAK can compute pseudorandom functions. Specifically, under standard intractability assumptions (e.g., hardness of factoring, Decisional Diffie-Hellman, or Learning with Errors), the existence of obfuscators for NC^1 or even TC^0 implies the existence of general-purpose obfuscators for P. Previously, such a bootstrapping procedure was known to exist under the assumption that there exists a fully homomorphic encryption scheme whose decryption algorithm can be computed in WEAK. Our reduction works with respect to virtual black-box obfuscators and relativizes to ideal models.
Benny Applebaum

Homomorphic Cryptography

Homomorphic Authenticated Encryption Secure against Chosen-Ciphertext Attack
Abstract
We study homomorphic authenticated encryption, where privacy and authenticity of data are protected simultaneously. We define homomorphic versions of various security notions for privacy and authenticity, and investigate relations between them. In particular, we show that it is possible to give a natural definition of IND-CCA for homomorphic authenticated encryption, unlike the case of homomorphic encryption. Also, we construct a simple homomorphic authenticated encryption scheme supporting arithmetic circuits, which is chosen-ciphertext secure both for privacy and authenticity. Our scheme is based on the error-free approximate GCD assumption.
Chihong Joo, Aaram Yun
Authenticating Computation on Groups: New Homomorphic Primitives and Applications
Abstract
In this paper we introduce new primitives to authenticate computation on data expressed as elements in (cryptographic) groups. As with homomorphic authenticators, our primitives allow one to verify the correctness of the computation without knowledge of the original data set. More precisely, our contributions are two-fold.
First, we introduce the notion of linearly homomorphic authenticated encryption with public verifiability and show how to instantiate this primitive (in the random oracle model) to support Paillier’s ciphertexts. This immediately yields a very simple and efficient (publicly) verifiable computation mechanism for encrypted (outsourced) data based on Paillier’s cryptosystem.
As a second result, we show how to construct linearly homomorphic signature schemes to sign elements in bilinear groups (LHSG for short). Such signatures are very similar to (linearly homomorphic) structure-preserving ones, but they allow for more flexibility, as the signature is explicitly allowed to contain components which are not group elements. In this sense our contributions are as follows. First, we show a very simple construction of LHSG that is secure against weak random message attack (RMA). Next, we give evidence that RMA-secure LHSG are interesting in their own right by showing applications in the context of on-line/off-line homomorphic and network coding signatures. This notably provides what seems to be the first instantiation of homomorphic signatures achieving on-line/off-line efficiency trade-offs. Finally, we present a generic transform that converts RMA-secure LHSG into ones that achieve full security guarantees.
Dario Catalano, Antonio Marcedone, Orazio Puglisi
Compact VSS and Efficient Homomorphic UC Commitments
Abstract
We present a new compact verifiable secret sharing (VSS) scheme; based on it, we present the first construction of a homomorphic UC commitment scheme that requires only cheap symmetric cryptography, except for a small number of seed OTs. To commit to a k-bit string, the amortized communication cost is O(k) bits. Assuming a sufficiently efficient pseudorandom generator, the computational complexity is O(k) for the verifier and O(k^{1+ε}) for the committer (where ε < 1 is a constant). In an alternative variant of the construction, all complexities are O(k·polylog(k)). Our commitment scheme extends to vectors over any finite field and is additively homomorphic. By sending one extra message, the prover can allow the verifier to also check multiplicative relations on committed strings, as well as to verify that committed vectors a, b satisfy a = φ(b) for a linear function φ. These properties allow us to non-interactively implement any one-sided functionality where only one party has input (this includes UC-secure zero-knowledge proofs of knowledge). We also present a perfectly secure implementation of any multiparty functionality, based directly on our VSS. The communication required is proportional to a circuit implementing the functionality, up to a logarithmic factor. For a large natural class of circuits the overhead is even constant. We also improve earlier results by Ranellucci et al. on the amount of correlated randomness required for string commitments with individual opening of bits.
Ivan Damgård, Bernardo David, Irene Giacomelli, Jesper Buus Nielsen

Secret Sharing

Round-Optimal Password-Protected Secret Sharing and T-PAKE in the Password-Only Model
Abstract
In a Password-Protected Secret Sharing (PPSS) scheme with parameters (t,n) (formalized by Bagherzandi et al. [2]), a user Alice stores secret information among n servers so that she can later recover the information solely on the basis of her password. The security requirement is similar to a (t,n)-threshold secret sharing, i.e., Alice can recover her secret as long as she can communicate with t + 1 honest servers but an attacker gaining access to t servers cannot learn any information about the secret. In particular, the system is secure against offline password attacks by an attacker controlling up to t servers. On the other hand, accounting for inevitable on-line attacks one allows the attacker an advantage proportional to the fraction of dictionary passwords tested in on-line interactions with the user and servers.
We present the first round-optimal PPSS scheme, requiring just one message from user to server and from server to user, and prove its security in the challenging password-only setting where users do not have access to an authenticated public key. The scheme uses an Oblivious PRF whose security we define using a UC-style ideal functionality for which we show concrete, truly practical realizations in the random oracle model as well as standard-model instantiations. As an important application we use this scheme to build the first single-round password-only Threshold-PAKE protocol in the CRS and ROM models for arbitrary (t,n) parameters with no PKI requirements for any party (clients or servers) and no inter-server communication. Our T-PAKE protocols are built by combining suitable key exchange protocols on top of our PPSS schemes. We prove T-PAKE security via a generic composition theorem showing the security of any such composed protocol.
Stanislaw Jarecki, Aggelos Kiayias, Hugo Krawczyk
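The (t,n)-threshold semantics invoked in the abstract above (t + 1 shares reconstruct, t shares reveal nothing) are those of classic Shamir secret sharing, sketched below as a textbook illustration. This is not the paper's PPSS protocol; the prime and function names are our own choices.

```python
import secrets

P = 2**127 - 1  # a Mersenne prime; shares live in the field GF(P)

def share(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Shamir (t,n): evaluate a random degree-t polynomial with
    constant term `secret` at points 1..n."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t)]
    def f(x: int) -> int:
        acc = 0
        for c in reversed(coeffs):      # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 recovers the constant term."""
    acc = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        acc = (acc + yi * num * pow(den, -1, P)) % P
    return acc
```

Any t + 1 of the n points determine the degree-t polynomial, while any t points are consistent with every possible secret, which is the information-theoretic guarantee the PPSS definition builds on.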
Secret-Sharing for NP
Abstract
A computational secret-sharing scheme is a method that enables a dealer, that has a secret, to distribute this secret among a set of parties such that a “qualified” subset of parties can efficiently reconstruct the secret while any “unqualified” subset of parties cannot efficiently learn anything about the secret. The collection of “qualified” subsets is defined by a monotone Boolean function.
It has been a major open problem to understand which (monotone) functions can be realized by a computational secret-sharing scheme. Yao suggested a method for secret-sharing for any function that has a polynomial-size monotone circuit (a class which is strictly smaller than the class of monotone functions in P). Around 1990 Rudich raised the possibility of obtaining secret-sharing for all monotone functions in NP: In order to reconstruct the secret a set of parties must be “qualified” and provide a witness attesting to this fact.
Recently, Garg et al. [14] put forward the concept of witness encryption, where the goal is to encrypt a message relative to a statement x ∈ L for a language L ∈ NP such that anyone holding a witness to the statement can decrypt the message; however, if x ∉ L, then it is computationally hard to decrypt. Garg et al. showed how to construct several cryptographic primitives from witness encryption and gave a candidate construction.
One can show that computational secret-sharing implies witness encryption for the same language. Our main result is the converse: we give a construction of a computational secret-sharing scheme for any monotone function in NP assuming witness encryption for NP and one-way functions. As a consequence we get a completeness theorem for secret-sharing: computational secret-sharing scheme for any single monotone NP-complete function implies a computational secret-sharing scheme for every monotone function in NP.
Ilan Komargodski, Moni Naor, Eylon Yogev

Block Ciphers and Passwords

Tweaks and Keys for Block Ciphers: The TWEAKEY Framework
Abstract
We propose the TWEAKEY framework with the goal of unifying the design of tweakable block ciphers and of block ciphers resistant to related-key attacks. Our framework is simple, extends the key-alternating construction, and allows building a primitive with arbitrary tweak and key sizes, given the public round permutation (for instance, the AES round). Increasing the sizes renders the security analysis very difficult, and thus we identify a subclass of TWEAKEY, which we name STK, that solves the size issue by using finite field multiplications by low-Hamming-weight constants. Overall, this construction allows a significant increase in the security of well-known authenticated encryption modes like ΘCB3, from birthday-bound security to full security, where a regular block cipher was used as a black box to build a tweakable block cipher. Our work can also be seen as an advance on the topic of secure key schedule design.
Jérémy Jean, Ivica Nikolić, Thomas Peyrin
Memory-Demanding Password Scrambling
Abstract
Most of the common password scramblers hinder password-guessing attacks by “key stretching”, e.g., by iterating a cryptographic hash function many times. With the increasing availability of cheap and massively parallel off-the-shelf hardware, iterating a hash function becomes less and less useful. To defend against attacks based on such hardware, one can exploit its limitations regarding the amount of fast memory available to each single core. The first password scrambler taking this into account was scrypt. In this paper we mount a cache-timing attack on scrypt by exploiting its password-dependent memory-access pattern. Furthermore, we show that it is possible to apply an efficient password filter for scrypt based on a malicious garbage collector. As a remedy, we present a novel password scrambler called Catena which provides both a password-independent memory-access pattern and resistance against garbage-collector attacks. Furthermore, Catena instantiated with the (G,λ)-DBH operation introduced here satisfies a certain time-memory tradeoff called λ-memory-hardness, i.e., using only 1/b of the memory, the time necessary to compute the password hash increases by a factor of b^λ. Finally, we introduce a more efficient instantiation of Catena based on a bit-reversal graph.
Christian Forler, Stefan Lucks, Jakob Wenzel
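The two ideas in the abstract above, a password-independent memory-access pattern and a bit-reversal graph, can be combined in a toy sketch: fill a memory array sequentially with hash values, then mix it in passes whose access pattern follows bit-reversed indices and depends only on public parameters. This is a loose illustration of the design idea, not Catena itself; all parameters and names here are simplifications of ours.

```python
import hashlib

def H(*parts: bytes) -> bytes:
    """SHA-256 over the concatenation of the inputs."""
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def bit_reverse(i: int, g: int) -> int:
    """Reverse the g-bit representation of i."""
    return int(format(i, f"0{g}b")[::-1], 2)

def toy_memory_hard_hash(pwd: bytes, salt: bytes, g: int, lam: int) -> bytes:
    """Fill 2^g memory words, then lam mixing passes along a
    bit-reversal pattern.  Which cells are read depends only on
    (g, lam), never on the password: no cache-timing leak."""
    n = 1 << g
    v = [H(pwd, salt)]
    for i in range(1, n):
        v.append(H(v[i - 1]))          # sequential fill
    for _ in range(lam):
        v[0] = H(v[n - 1], v[0])
        for i in range(1, n):
            v[i] = H(v[i - 1], v[bit_reverse(i, g)])
    return v[n - 1]
```

Contrast this with scrypt, whose second phase reads memory at indices derived from the data itself; that password-dependent pattern is exactly what the paper's cache-timing attack exploits.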

Side Channel Analysis II

Side-Channel Analysis of Multiplications in GF(2^128)
Application to AES-GCM
Abstract
In this paper, we study the side-channel security of field multiplication in GF(2^n). We particularly focus on GF(2^128) multiplication, which is used in the authentication part of \(\mathsf{AES}\textrm{-}\mathsf{GCM}\), but the proposed attack also applies to other binary extensions. In a hardware implementation using a 128-bit multiplier, the full 128-bit secret is manipulated at once. In this context, classical DPA attacks based on the divide-and-conquer strategy cannot be applied. In this work, the algebraic structure of the multiplication is leveraged to recover bits of information about the secret multiplicand without having to perform any key guess. To do so, the leakage corresponding to the writing of the multiplication output into a register is considered. It is assumed to follow a Hamming weight/distance leakage model. Under these particular, yet easily met, assumptions we exhibit a nice connection between the key recovery problem and some classical coding and Learning Parities with Noise problems with certain instance parameters. In our case, the noise is very high, but the length of the secret is rather short. In this work we investigate different solving techniques corresponding to different attacker models and eventually refine the attack when considering particular implementations of the multiplication.
Sonia Belaïd, Pierre-Alain Fouque, Benoît Gérard
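The object under attack in the abstract above, a GF(2^128) multiplication followed by a Hamming-weight leakage on the register write, can be sketched as follows. The sketch uses a plain integer bit encoding of field elements and ignores GCM's reflected bit order; it illustrates the leakage model, not the attack itself.

```python
# GCM reduction polynomial: x^128 + x^7 + x^2 + x + 1
R = (1 << 128) | (1 << 7) | (1 << 2) | (1 << 1) | 1

def gf128_mul(a: int, b: int) -> int:
    """Carry-less multiplication in GF(2^128), reduced modulo R.
    Plain integer encoding (bit i = coefficient of x^i)."""
    res = 0
    while b:
        if b & 1:
            res ^= a          # add (XOR) the current shift of a
        b >>= 1
        a <<= 1               # multiply a by x
        if a >> 128:
            a ^= R            # reduce when degree reaches 128
    return res

def hw_leakage(secret: int, msg: int) -> int:
    """Hamming-weight model: the side channel reveals the number of
    1-bits written to the output register."""
    return bin(gf128_mul(secret, msg)).count("1")
```

Since the attacker can observe `hw_leakage(secret, msg)` for many known `msg` values, and each observation is a noisy linear-algebraic constraint on the secret's bits, key recovery reduces to the coding/LPN-style problems the abstract mentions.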
Higher-Order Threshold Implementations
Abstract
Higher-order differential power analysis attacks are a serious threat for cryptographic hardware implementations. In particular, glitches in the circuit make it hard to protect the implementation with masking. The existing higher-order masking countermeasures that guarantee security in the presence of glitches use multi-party computation techniques and require a lot of resources in terms of circuit area and randomness. The Threshold Implementation method is also based on multi-party computation but it is more area and randomness efficient. Moreover, it typically requires fewer clock cycles since all parties can operate simultaneously. However, so far it is only provably secure against 1st-order DPA. We address this gap and extend the Threshold Implementation technique to higher orders. We define generic constructions and prove their security. To illustrate the approach, we provide 1st-, 2nd- and 3rd-order DPA-resistant implementations of the block cipher KATAN-32. Our analysis of 300 million power traces measured from an FPGA implementation supports the security proofs.
Begül Bilgin, Benedikt Gierlichs, Svetla Nikova, Ventzislav Nikov, Vincent Rijmen
Masks Will Fall Off
Higher-Order Optimal Distinguishers
Abstract
Higher-order side-channel attacks are able to break the security of cryptographic implementations even if they are protected with masking countermeasures. In this paper, we derive the best possible distinguishers (High-Order Optimal Distinguishers or HOOD) against masking schemes under the assumption that the attacker can profile. Our exact derivation admits simple approximate expressions for high and low noise and shows to which extent the optimal distinguishers reduce to known attacks in the case where no profiling is possible. From these results, we can explain theoretically the empirical outcome of recent works on second-order distinguishers. In addition, we extend our analysis to any order and to the application to masked tables precomputation. Our results give some insight on which distinguishers have to be considered in the security analysis of cryptographic devices.
Nicolas Bruneau, Sylvain Guilley, Annelie Heuser, Olivier Rioul

Black-Box Separation

Black-Box Separations for One-More (Static) CDH and Its Generalization
Abstract
As one-more problems are widely used in both proving and analyzing the security of various cryptographic schemes, it is of fundamental importance to investigate the hardness of the one-more problems themselves. Bresson et al. (CT-RSA ’08) first showed that it is difficult to base the hardness of some one-more problems on the hardness of their “regular” counterparts. Pass (STOC ’11) then gave a stronger black-box separation showing that the hardness of some one-more problems cannot be based on standard assumptions using black-box reductions. However, since previous works only deal with one-more problems whose solution can be efficiently checked, the relation between the hardness of the one-more (static) CDH problem over non-bilinear groups and other hard problems is still unclear. In this work, we give the first impossibility results showing that black-box reductions cannot be used to base the hardness of the one-more (static) CDH problem (over groups where the DDH problem is still hard) on any standard hardness assumption. Furthermore, we also extend the impossibility results to a class of generalized “one-more” problems, which not only subsume/strengthen many existing separations for traditional one-more problems, but also give new separations for many other interesting “one-more” problems.
Jiang Zhang, Zhenfeng Zhang, Yu Chen, Yanfei Guo, Zongyang Zhang
Black-Box Separations for Differentially Private Protocols
Abstract
We study the maximal achievable accuracy of distributed differentially private protocols for a large natural class of boolean functions, in the computational setting.
In the information-theoretic model, McGregor et al. [FOCS 2010] and Goyal et al. [CRYPTO 2013] demonstrate several functionalities whose differentially private computation results in much lower accuracies in the distributed setting, as compared to the client-server setting.
We explore lower bounds on the computational assumptions under which this accuracy gap can possibly be reduced for two-party boolean output functions. In the distributed setting, it is possible to achieve optimal accuracy, i.e. the maximal achievable accuracy in the client-server setting, for any function, if a semi-honest secure protocol for oblivious transfer exists. However, we show the following strong impossibility results:
  • For any boolean function and any fixed level of privacy, the maximal accuracy achievable by any (fully) black-box construction based on the existence of key-agreement protocols is at least a constant smaller than the optimal achievable accuracy. Since key-agreement protocols imply the existence of one-way functions, this separation also extends to one-way functions.
  • Our results are tight for the AND and XOR functions. For AND, there exists an accuracy threshold such that any accuracy up to the threshold can be achieved information-theoretically, while no (fully) black-box construction based on the existence of key agreement can achieve accuracy beyond this threshold. An analogous statement holds for XOR (albeit with a different accuracy threshold).
Our results build on recent developments in black-box separation techniques for functions with private input [1,16,27,28]; and translate information theoretic impossibilities into black-box separation results.
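For intuition on the accuracy measure being separated, the classic randomized-response mechanism is the textbook information-theoretic baseline for distributed differentially private computation of a boolean function. It is not a construction from this paper, and the epsilon below is arbitrary.

```python
import math
import random

# Illustration only: randomized response, a baseline for the accuracy
# notions above. Each party perturbs its own bit, which gives
# eps-differential privacy for that bit; the distributed XOR estimate
# is then correct only with probability keep^2 + (1 - keep)^2 < 1.
def randomized_response(bit, eps):
    keep = math.exp(eps) / (math.exp(eps) + 1.0)
    return bit if random.random() < keep else 1 - bit

def noisy_xor(a, b, eps):
    # Each party releases a randomized copy of its input; XOR of the
    # releases approximates XOR(a, b) with accuracy bounded away from 1.
    return randomized_response(a, eps) ^ randomized_response(b, eps)

random.seed(0)
trials = 20000
correct = sum(noisy_xor(0, 1, eps=1.0) == 1 for _ in range(trials))
print(correct / trials)   # roughly 0.61 for eps = 1
```

The gap between this distributed accuracy and the client-server optimum (where a single trusted curator adds noise once) is exactly the kind of gap whose computational reducibility the paper rules out for key-agreement-based black-box constructions.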
Dakshita Khurana, Hemanta K. Maji, Amit Sahai

Composability

Composable Security of Delegated Quantum Computation
Abstract
Delegating difficult computations to remote large computation facilities, with appropriate security guarantees, is a possible solution for the ever-growing needs of personal computing power. For delegated computation protocols to be usable in a larger context – or simply to securely run two protocols in parallel – the security definitions need to be composable. Here, we define composable security for delegated quantum computation. We distinguish between protocols which provide only blindness – the computation is hidden from the server – and those that are also verifiable – the client can check that it has received the correct result. We show that the composable security definition capturing both these notions can be reduced to a combination of several distinct “trace-distance-type” criteria – which are, individually, non-composable security definitions.
Additionally, we study the security of some known delegated quantum computation protocols, including Broadbent, Fitzsimons and Kashefi’s Universal Blind Quantum Computation protocol. Even though these protocols were originally proposed with insufficient security criteria, they turn out to still be secure given the stronger composable definitions.
Vedran Dunjko, Joseph F. Fitzsimons, Christopher Portmann, Renato Renner
All-But-Many Encryption
A New Framework for Fully-Equipped UC Commitments
Abstract
We present a general framework for constructing non-interactive universally composable (UC) commitment schemes that are secure against adaptive adversaries in the non-erasure model under a re-usable common reference string. Previously, such “fully-equipped” UC commitment schemes have been known only in [5,6], with strict expansion factor O(κ); meaning that committing to λ bits strictly requires O(λκ) bits of communication, where κ denotes the security parameter. Efficient construction of a fully-equipped UC commitment scheme has been a long-standing open problem. We introduce a new abstraction, called all-but-many encryption (ABME), and prove that it captures a fully-equipped UC commitment scheme. We propose the first fully-equipped UC commitment scheme with optimal expansion factor Ω(1) from our ABME scheme based on the DCR assumption. We also provide an all-but-many lossy trapdoor function (ABM-LTF) [18] from our DCR-based ABME scheme, with a better lossy rate than [18].
Eiichiro Fujisaki

Multi-Party Computation

Multi-valued Byzantine Broadcast: The t < n Case
Abstract
Byzantine broadcast is a distributed primitive that allows a specific party to consistently distribute a message among n parties in the presence of potential misbehavior of up to t of the parties. All known protocols implementing broadcast of an ℓ-bit message from point-to-point channels tolerating any t < n Byzantine corruptions have communication complexity at least Ω(ℓn²). In this paper we give cryptographically secure and information-theoretically secure protocols for t < n that communicate O(ℓn) bits when ℓ is sufficiently large. This matches the optimal communication complexity bound for any protocol broadcasting ℓ-bit messages. While broadcast protocols with the optimal communication complexity exist for t < n/2, this paper is the first to present such protocols for t < n.
Martin Hirt, Pavel Raykov
Fairness versus Guaranteed Output Delivery in Secure Multiparty Computation
Abstract
In the setting of secure multiparty computation, a set of parties wish to compute a joint function of their private inputs. The computation should preserve security properties such as privacy, correctness, independence of inputs, fairness and guaranteed output delivery. In the case of no honest majority, fairness and guaranteed output delivery cannot always be obtained. Thus, protocols for secure multiparty computation are typically of two disparate types: protocols that assume an honest majority (and achieve all properties including fairness and guaranteed output delivery), and protocols that do not assume an honest majority (and achieve all properties except for fairness and guaranteed output delivery). In addition, in the two-party case, fairness and guaranteed output delivery are equivalent. As a result, the properties of fairness (which means that if corrupted parties receive output then so do the honest parties) and guaranteed output delivery (which means that corrupted parties cannot prevent the honest parties from receiving output in any case) have typically been considered to be the same.
In this paper, we initiate a study of the relation between fairness and guaranteed output delivery in secure multiparty computation. We show that in the multiparty setting these properties are distinct and proceed to study under what conditions fairness implies guaranteed output delivery (the opposite direction always holds). We also show the existence of non-trivial functions for which complete fairness is achievable (without an honest majority) but guaranteed output delivery is not, and the existence of non-trivial functions for which complete fairness and guaranteed output delivery are achievable. Our study sheds light on the role of broadcast in fairness and guaranteed output delivery, and shows that these properties should sometimes be considered separately.
Ran Cohen, Yehuda Lindell
Actively Secure Private Function Evaluation
Abstract
We propose the first general framework for designing actively secure private function evaluation (PFE), not based on universal circuits. Our framework is naturally divided into pre-processing and online stages and can be instantiated using any generic actively secure multiparty computation (MPC) protocol.
Our framework helps address the main open questions about the efficiency of actively secure PFE. On the theoretical side, our framework yields the first actively secure PFE with linear complexity in the circuit size. On the practical side, we obtain the first actively secure PFE for arithmetic circuits with O(g · log g) complexity, where g is the circuit size. The best previous construction (of practical interest) is based on an arithmetic universal circuit and has complexity O(g⁵).
We also introduce the first linear zero-knowledge proof of correctness of an “extended permutation” of ciphertexts (a generalization of ZK proofs of correct shuffles), which may be of independent interest.
Payman Mohassel, Saeed Sadeghian, Nigel P. Smart
Efficient, Oblivious Data Structures for MPC
Abstract
We present oblivious implementations of several data structures for secure multiparty computation (MPC) such as arrays, dictionaries, and priority queues. The resulting oblivious data structures have only polylogarithmic overhead compared with their classical counterparts. To achieve this, we give secure multiparty protocols for the ORAM of Shi et al. (Asiacrypt ‘11) and the Path ORAM scheme of Stefanov et al. (CCS ‘13), and we compare the resulting implementations. We subsequently use our oblivious priority queue for secure computation of Dijkstra’s shortest path algorithm on general graphs, where the graph structure is secret. To the best of our knowledge, this is the first implementation of a non-trivial graph algorithm in multiparty computation with polylogarithmic overhead.
We implemented and benchmarked most of our protocols using the SPDZ protocol of Damgård et al. (Crypto ‘12), which works in the preprocessing model and ensures active security against an adversary corrupting all but one of the players. For two parties, the online access time for an oblivious array of size one million is under 100 ms.
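As a point of reference for what “oblivious” means here, the trivial baseline is a linear scan whose memory trace is independent of the secret index; the schemes above replace this O(n) scan with polylogarithmic-overhead ORAM run inside MPC. The function below is a local, single-machine illustration of that access-pattern property, not MPC code.

```python
# Local illustration of obliviousness, not MPC code: a read that touches
# every slot with the same operations, so the access pattern leaks
# nothing about the secret index. The paper's contribution is replacing
# this trivial O(n) scan with polylogarithmic-overhead ORAM inside MPC.
def oblivious_read(arr, secret_index):
    result = 0
    for i, v in enumerate(arr):
        mask = -(i == secret_index)     # -1 (all ones) on match, else 0
        result = (result & ~mask) | (v & mask)
    return result

print(oblivious_read([10, 20, 30, 40], 2))   # 30
```

Every call performs the identical sequence of reads and bitwise operations regardless of `secret_index`, which is the property the oblivious arrays, dictionaries, and priority queues above preserve at far lower cost.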
Marcel Keller, Peter Scholl
Backmatter
Metadata
Title
Advances in Cryptology – ASIACRYPT 2014
Edited by
Palash Sarkar
Tetsu Iwata
Copyright year
2014
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-662-45608-8
Print ISBN
978-3-662-45607-1
DOI
https://doi.org/10.1007/978-3-662-45608-8