
2015 | Book

Theory of Cryptography

12th Theory of Cryptography Conference, TCC 2015, Warsaw, Poland, March 23-25, 2015, Proceedings, Part I

Edited by: Yevgeniy Dodis, Jesper Buus Nielsen

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

The two-volume set LNCS 9014 and LNCS 9015 constitutes the refereed proceedings of the 12th International Conference on Theory of Cryptography, TCC 2015, held in Warsaw, Poland in March 2015.
The 52 revised full papers presented were carefully reviewed and selected from 137 submissions. The papers are organized in topical sections on foundations, symmetric key, multiparty computation, concurrent and resettable security, non-malleable codes and tampering, privacy amplification, encryption and key exchange, pseudorandom functions and applications, proofs and verifiable computation, differential privacy, functional encryption, and obfuscation.

Table of contents

Frontmatter

Foundations

On Basing Size-Verifiable One-Way Functions on NP-Hardness
Abstract
We prove that if the hardness of inverting a size-verifiable one-way function can be based on NP-hardness via a general (adaptive) reduction, then NP ⊆ coAM. This claim was made by Akavia, Goldreich, Goldwasser, and Moshkovitz (STOC 2006), but was later retracted (STOC 2010).
Andrej Bogdanov, Christina Brzuska
The Randomized Iterate, Revisited - Almost Linear Seed Length PRGs from a Broader Class of One-Way Functions
Abstract
We revisit “the randomized iterate” technique that was originally used by Goldreich, Krawczyk, and Luby (SICOMP 1993) and refined by Haitner, Harnik and Reingold (CRYPTO 2006) in constructing pseudorandom generators (PRGs) from regular one-way functions (OWFs). We abstract out a technical lemma (which is folklore in leakage resilient cryptography), and use it to provide a simpler and more modular proof for the Haitner-Harnik-Reingold PRGs from regular OWFs.
We introduce a more general class of OWFs called “weakly-regular one-way functions”, from which we construct a PRG of seed length O(n·log n). More specifically, consider an arbitrary one-way function f with range divided into sets \({\mathcal{Y}}_1\), \({\mathcal{Y}}_2\), …, \({\mathcal{Y}}_n\), where each \({\mathcal{Y}}_i\stackrel{\sf def}{=}\{y:2^{i-1}\le|f^{-1}(y)|<2^{i}\}\). We say that f is weakly-regular if there exists a (not necessarily efficiently computable) cut-off point max such that \({\mathcal{Y}}_{\rm max}\) constitutes a noticeable portion of the range (say \(n^{-c}\) for some constant c), while \({\mathcal{Y}}_{{\rm max}+1}\), …, \({\mathcal{Y}}_{n}\) sum to only a negligible fraction. We construct a PRG by making \(\tilde{O}(n^{2c+1})\) calls to f, and achieve seed length O(n·log n) using bounded-space generators. This generalizes the approach of Haitner et al., of which regular OWFs are the special case c = 0. Our proof technique is similar to, and extends, the method used by Haitner, Harnik and Reingold for hardness amplification of regular weakly-one-way functions.
Our work further explores the feasibility and limits of the “randomized iterate” type of black-box construction. In particular, the underlying f can have arbitrary structure as long as the set of images with maximal preimage size constitutes a noticeable fraction of the range. In addition, our construction is much more seed-length efficient and security-preserving (albeit less general) than the HILL-style generators, for which the best known construction, by Vadhan and Zheng (STOC 2012), requires seed length \(\tilde{O}(n^3)\).
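To make the range partition above concrete, here is a small Python sketch (our illustration, with a toy function standing in for the one-way function f) that buckets range elements by preimage size, exactly as in the definition of the sets \({\mathcal{Y}}_i\):

# Illustrative sketch (not from the paper): bucket the range of a toy
# function f by preimage size, following the definition
#   Y_i = { y : 2^(i-1) <= |f^{-1}(y)| < 2^i }.
from collections import Counter

def range_partition(f, domain):
    """Return {i: set of y with 2^(i-1) <= |f^{-1}(y)| < 2^i}."""
    preimage_count = Counter(f(x) for x in domain)
    buckets = {}
    for y, count in preimage_count.items():
        i = count.bit_length()   # the unique i with 2^(i-1) <= count < 2^i
        buckets.setdefault(i, set()).add(y)
    return buckets

# Toy stand-in for f on 8-bit inputs (hypothetical example, not one-way).
f = lambda x: (x * x) % 251
for i, ys in sorted(range_partition(f, range(256)).items()):
    print(f"Y_{i}: {len(ys)} range elements")

For a weakly-regular f, a single bucket \({\mathcal{Y}}_{\rm max}\) would carry a noticeable \(n^{-c}\) fraction of the images, with only negligible mass above it.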
Yu Yu, Dawu Gu, Xiangxue Li, Jian Weng
The Power of Negations in Cryptography
Abstract
The study of monotonicity and negation complexity for Boolean functions has been prevalent in complexity theory as well as in computational learning theory, but little attention has been given to it in the cryptographic context. Recently, Goldreich and Izsak (2012) initiated a study of whether cryptographic primitives can be monotone, and showed that one-way functions can be monotone (assuming they exist), but a pseudorandom generator cannot.
In this paper, we start by filling in the picture and proving that many other basic cryptographic primitives cannot be monotone. We then initiate a quantitative study of the power of negations, asking how many negations are required. We provide several lower bounds, some of them tight, for various cryptographic primitives and building blocks including one-way permutations, pseudorandom functions, small-bias generators, hard-core predicates, error-correcting codes, and randomness extractors. Among our results, we highlight the following.
  • Unlike one-way functions, one-way permutations cannot be monotone.
  • We prove that pseudorandom functions require log n − O(1) negations (which is optimal up to the additive term).
  • We prove that error-correcting codes with optimal distance parameters require log n − O(1) negations (again, optimal up to the additive term).
  • We prove a general result for monotone functions, showing a lower bound on the depth of any circuit with t negations on the bottom that computes a monotone function f in terms of the monotone circuit depth of f. This result addresses a question posed by Koroth and Sarma (2014) in the context of the circuit complexity of the Clique problem.
Siyao Guo, Tal Malkin, Igor C. Oliveira, Alon Rosen
From Weak to Strong Zero-Knowledge and Applications
Abstract
The notion of zero-knowledge [20] is formalized by requiring that for every malicious efficient verifier V*, there exists an efficient simulator S that can reconstruct the view of V* in a true interaction with the prover, in a way that is indistinguishable to every polynomial-time distinguisher. Weak zero-knowledge weakens this notion by switching the order of the quantifiers: it only requires that for every distinguisher D, there exists a (potentially different) simulator \(S_D\).
In this paper we consider various notions of zero-knowledge, and investigate whether their weak variants are equivalent to their strong variants. Although we show (under complexity assumptions) that for the standard notion of zero-knowledge the weak and strong counterparts are not equivalent, for meaningful variants of the standard notion they are indeed equivalent. Towards showing these equivalences, we introduce new non-black-box simulation techniques permitting us, for instance, to demonstrate that the classical 2-round graph non-isomorphism protocol of Goldreich-Micali-Wigderson [18] satisfies a “distributional” variant of zero-knowledge.
Our equivalence theorem has other applications beyond the notion of zero-knowledge. For instance, it directly implies the dense model theorem of Reingold et al. (STOC ’08) and the leakage lemma of Gentry-Wichs (STOC ’11), and provides a modular and arguably simpler proof of these results (while at the same time recasting them in the language of zero-knowledge).
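Since the order of quantifiers is the crux of the weak/strong distinction, it may help to display the two notions side by side (our paraphrase of the definitions sketched above, suppressing auxiliary inputs):

$$\text{ZK:}\quad \forall V^{*}\ \exists S\ \forall D:\quad \left|\Pr\left[D\big(\mathrm{view}_{V^{*}}\langle P,V^{*}\rangle(x)\big)=1\right]-\Pr\left[D\big(S(x)\big)=1\right]\right|\le \mathrm{negl}(|x|)$$

$$\text{Weak ZK:}\quad \forall V^{*}\ \forall D\ \exists S_{D}:\quad \left|\Pr\left[D\big(\mathrm{view}_{V^{*}}\langle P,V^{*}\rangle(x)\big)=1\right]-\Pr\left[D\big(S_{D}(x)\big)=1\right]\right|\le \mathrm{negl}(|x|)$$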
Kai-Min Chung, Edward Lui, Rafael Pass
An Efficient Transform from Sigma Protocols to NIZK with a CRS and Non-programmable Random Oracle
Abstract
In this short paper, we present a Fiat-Shamir type transform that takes any Sigma protocol for a relation R and outputs a non-interactive zero-knowledge proof (not of knowledge) for the associated language \(L_R\), in the common reference string model. As in the Fiat-Shamir transform, we use a hash function H. However, zero-knowledge is achieved under standard assumptions in the common reference string model (without any random oracle), and soundness is achieved in the non-programmable random oracle model. The concrete computational complexity of the transform is only slightly higher than that of the original Fiat-Shamir transform.
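For reference, the following Python sketch shows the bare mechanics of the plain Fiat-Shamir transform that this construction refines, using Schnorr's Sigma protocol for discrete logarithms as a stand-in (toy parameters; the paper's CRS-based zero-knowledge machinery is deliberately omitted):

# Minimal sketch of the plain Fiat-Shamir transform on a Sigma protocol
# (Schnorr's proof of knowledge of a discrete log as a stand-in; toy
# parameters, illustrative only).
import hashlib, secrets

p = 2**255 - 19        # toy prime modulus
q = (p - 1) // 2       # g = 4 is a square mod p, so its order divides q
g = 4

def H(*vals):          # the challenge is a hash of the transcript
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x):          # witness x for the statement X = g^x mod p
    X = pow(g, x, p)
    r = secrets.randbelow(q)
    a = pow(g, r, p)            # Sigma-protocol first message (commitment)
    e = H(g, X, a)              # Fiat-Shamir: challenge = H(statement, a)
    z = (r + e * x) % q         # Sigma-protocol response
    return X, (a, e, z)

def verify(X, proof):
    a, e, z = proof
    return e == H(g, X, a) and pow(g, z, p) == (a * pow(X, e, p)) % p

X, proof = prove(secrets.randbelow(q))
assert verify(X, proof)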
Yehuda Lindell

Symmetric Key

On the Indifferentiability of Key-Alternating Feistel Ciphers with No Key Derivation
Abstract
Feistel constructions were shown to be indifferentiable from random permutations at STOC 2011. However, how to properly mix keys into an un-keyed Feistel construction, without appealing to the domain separation technique, so as to obtain a block cipher that is provably secure against known-key and chosen-key attacks (or to obtain an ideal cipher), remains an open problem. We study this problem, focusing on the basic structure of the NSA's SIMON family of block ciphers. The SIMON family uses a construction in which the subkey is XORed into half of the state at each round. More precisely, at the i-th round, the state is updated according to
$$(x_i,x_{i-1})\mapsto(x_{i-1}\oplus F_i(x_i)\oplus k_i,x_i)$$
For such key-alternating Feistel ciphers, we show that 21 rounds suffice to achieve indifferentiability from ideal ciphers with 2n-bit blocks and n-bit keys, assuming that the n-to-n-bit round functions \(F_1, \ldots, F_{21}\) are random and public and that an identical user-provided n-bit key is applied at each round. This provides the first answer, to our knowledge, to the aforementioned question.
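The round update is easy to state in code. The following Python sketch (our illustration; SHA-256 stands in for the public random round functions \(F_i\)) iterates the 21-round key-alternating Feistel structure just described:

# Illustrative sketch of the key-alternating Feistel structure above:
# 21 rounds of (x_i, x_{i-1}) -> (x_{i-1} XOR F_i(x_i) XOR k, x_i), with
# public round functions F_i modeled by SHA-256 and one n-bit key k used
# at every round (toy half-block size n = 64).
import hashlib

N = 64
MASK = (1 << N) - 1

def F(i, x):  # public round function F_i: n bits -> n bits
    digest = hashlib.sha256(b"round-%d|%d" % (i, x)).digest()
    return int.from_bytes(digest[:N // 8], "big")

def encrypt(left, right, key, rounds=21):
    # state (x_i, x_{i-1}) = (left, right)
    for i in range(1, rounds + 1):
        left, right = (right ^ F(i, left) ^ key) & MASK, left
    return left, right

def decrypt(left, right, key, rounds=21):
    for i in range(rounds, 0, -1):
        left, right = right, (left ^ F(i, right) ^ key) & MASK
    return left, right

pt = (0x0123456789ABCDEF, 0xFEDCBA9876543210)
key = 0xA5A5A5A5A5A5A5A5
assert decrypt(*encrypt(*pt, key), key) == pt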
Chun Guo, Dongdai Lin

Multiparty Computation

A Little Honesty Goes a Long Way
The Two-Tier Model for Secure Multiparty Computation
Abstract
A fundamental result in secure multiparty computation (MPC) is that in order to achieve full security, it is necessary that a majority of the parties behave honestly. There are settings, however, where the condition of an honest majority might be overly restrictive, and there is a need to define and investigate other plausible adversarial models in order to circumvent the above impossibility.
To this end, we introduce the two-tier model for MPC, where some small subset of servers is guaranteed to be honest at the beginning of the computation (the first tier), while the corruption state of the other servers (the second tier) is unknown. The two-tier model naturally arises in various settings, such as for example when a service provider wishes to utilize a large pre-existing set of servers, while being able to trust only a small fraction of them.
The first tier is responsible for performing the secure computation, while the second tier serves as a disguise: using novel anonymization techniques, servers in the first tier remain undetected by an adaptive adversary, preventing a targeted attack on these critical servers. Specifically, given n servers and assuming αn of them are corrupt at the onset (where α ∈ (0,1)), we present an MPC protocol that can withstand an optimal number of additional adaptive corruptions, namely any number less than (1 − α)n/2, provided the first tier is of size ω(log n). This allows us to perform MPC in a fully secure manner even when the total number of corruptions across both tiers exceeds n/2, thus evading the honest-majority requirement.
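Summing the two tiers makes the final claim explicit: the total number of tolerated corruptions approaches

$$\alpha n + \frac{(1-\alpha)n}{2} \;=\; \frac{(1+\alpha)n}{2} \;>\; \frac{n}{2} \qquad \text{for every } \alpha \in (0,1),$$

so an honest majority is not required across the two tiers combined.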
Juan A. Garay, Ran Gelles, David S. Johnson, Aggelos Kiayias, Moti Yung
Topology-Hiding Computation
Abstract
Secure multi-party computation (MPC) is one of the foundational achievements of modern cryptography, allowing multiple distrusting parties to jointly compute a function of their inputs while revealing nothing but the output of the function. Following the seminal works of Yao; Goldreich, Micali and Wigderson; and Ben-Or, Goldwasser and Wigderson, the study of MPC has expanded to consider a wide variety of questions, including variants in the attack model, the underlying assumptions, and the complexity and composability of the resulting protocols.
One question that appears to have received very little attention, however, is that of MPC over an underlying communication network whose structure is, in itself, sensitive information. This question, in addition to being of pure theoretical interest, arises naturally in many contexts: privacy-preserving social networks, private peer-to-peer computations, vehicle-to-vehicle networks, and the “Internet of Things” are some examples.
In this paper, we initiate the study of “topology-hiding computation” in the computational setting. We give formal definitions in both simulation-based and indistinguishability-based flavors. We show that, even for fail-stop adversaries, there are some strong impossibility results. Despite this, we show that protocols for topology-hiding computation can be constructed in the semi-honest and fail-stop models, if we somewhat restrict the set of nodes the adversary may corrupt.
Tal Moran, Ilan Orlov, Silas Richelson
Secure Physical Computation Using Disposable Circuits
Abstract
In a secure physical computation, a set of parties each have physical inputs and jointly compute a function of their inputs in a way that reveals no information to any party except for the output of the function. Recent work at CRYPTO ’14 presented examples of physical zero-knowledge proofs of physical properties, a special case of secure physical two-party computation in which one party has a physical input and the second party verifies a Boolean function of that input. While that work suggested a general framework for modeling and analyzing physical zero-knowledge protocols, it did not provide a general theory of how to prove any physical property with zero-knowledge. This paper takes an orthogonal approach using disposable circuits (DC): cheap hardware tokens that can be completely destroyed after a computation, extending the familiar tamper-proof token model. In the DC model, we demonstrate that two parties can compute any function of their physical inputs in a way that leaks at most 1 bit of additional information to either party. Moreover, our result generalizes to any multi-party physical computation. Formally, our protocols achieve unconditional UC-security with input-dependent abort.
Ben A. Fisch, Daniel Freund, Moni Naor
Complete Characterization of Fairness in Secure Two-Party Computation of Boolean Functions
Abstract
Fairness is a desirable property in secure computation; informally it means that if one party gets the output of the function, then all parties get the output. Alas, an implication of Cleve’s result (STOC 86) is that when there is no honest majority, in particular in the important case of the two-party setting, there exist Boolean functions that cannot be computed with fairness. In a surprising result, Gordon et al. (JACM 2011) showed that some interesting functions can be computed with fairness in the two-party setting, and re-opened the question of understanding which Boolean functions can be computed with fairness, and which cannot.
Our main result in this work is a complete characterization of the (symmetric) Boolean functions that can be computed with fairness in the two-party setting; this settles an open problem of Gordon et al. The characterization is quite simple: a function can be computed with fairness if and only if the all-one vector or the all-zero vector is in the affine span of either the rows or the columns of the matrix describing the function. This is true for both deterministic and randomized functions. To prove the possibility result, we modify the protocol of Gordon et al.; the resulting protocol computes with full security (and in particular with fairness) all functions that are computable with fairness.
We extend the above result in two directions. First, we completely characterize the Boolean functions that can be computed with fairness in the multiparty case, when the number of parties is constant and at most half of the parties can be malicious. Second, we consider the two-party setting with asymmetric Boolean functionalities, that is, when the output of each party is one bit; however, the outputs are not necessarily the same. We provide both a sufficient condition and a necessary condition for fairness; however, a gap is left between these two conditions. We then consider a specific asymmetric function in this gap area, and by designing a new protocol, we show that it is computable with fairness. However, we do not give a complete characterization for all functions that lie in this gap, and their classification remains open.
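The criterion in the main characterization is mechanical to test. Here is a numpy sketch (our illustration, not code from the paper) that checks whether the all-zero or all-one vector lies in the affine span of the rows or columns of the Boolean matrix M describing a symmetric two-party function:

# Sketch (ours): test the fairness criterion that the all-one or all-zero
# vector lies in the affine span of the rows or columns of M (over the
# rationals; an affine combination has coefficients summing to 1).
import numpy as np

def in_affine_span(vectors, target):
    # Solvable [V^T; 1,...,1] c = [target; 1]  <=>  target is an affine
    # combination of the given vectors.
    A = np.vstack([np.asarray(vectors, dtype=float).T,
                   np.ones(len(vectors))])
    b = np.append(np.asarray(target, dtype=float), 1.0)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return bool(np.allclose(A @ coeffs, b))

def fair_computable(M):
    M = np.asarray(M, dtype=float)
    for vectors, length in ((M, M.shape[1]), (M.T, M.shape[0])):
        for target in (np.zeros(length), np.ones(length)):
            if in_affine_span(list(vectors), target):
                return True
    return False

print(fair_computable([[0, 1], [1, 0]]))  # False: XOR cannot be computed fairly
print(fair_computable([[1, 1], [1, 1]]))  # True: constant functions trivially can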
Gilad Asharov, Amos Beimel, Nikolaos Makriyannis, Eran Omri
Richer Efficiency/Security Trade-offs in 2PC
Abstract
The dual-execution protocol of Mohassel & Franklin (PKC 2006) is a highly efficient (each party garbling only one circuit) 2PC protocol that achieves malicious security apart from leaking an arbitrary, adversarially-chosen predicate about the honest party’s input. We present two practical and orthogonal approaches to improve the security of the dual-execution technique.
First, we show how to greatly restrict the predicate that an adversary can learn in the protocol, to a natural notion of “only computation leaks”-style leakage. Along the way, we identify a natural security property of garbled circuits called property-enforcing that may be of independent interest.
Second, we address a complementary direction: reducing the probability that the leakage occurs. We propose a new dual-execution protocol (with a very light cheating-detection phase, and with each party garbling s + 1 circuits) in which a cheating party learns a bit with probability only \(2^{-s}\). Our concrete measurements show approximately a 35% reduction in communication for the AES circuit, compared to the best combination of state-of-the-art techniques for achieving the same security notion.
Combining the two results, we achieve a rich continuum of practical trade-offs between efficiency and security, connecting the covert, dual-execution, and fully malicious guarantees.
Vladimir Kolesnikov, Payman Mohassel, Ben Riva, Mike Rosulek

Concurrent and Resettable Security

Round-Efficient Concurrently Composable Secure Computation via a Robust Extraction Lemma
Abstract
We consider the problem of constructing protocols for secure computation that achieve strong concurrent and composable notions of security in the plain model. Unfortunately, UC-secure protocols for secure computation are impossible in this setting, but the angel-based composable security notion offers a promising alternative. Until now, however, under standard (polynomial-time) assumptions, only protocols with polynomially many rounds were known to exist.
In this work, we give the first Õ(log n)-round secure computation protocol in the plain model that achieves angel-based composable security in the concurrent setting, under standard assumptions. We do so by constructing the first Õ(log n)-round CCA-secure commitment protocol. Our CCA-secure commitment protocol is secure based on the minimal assumption that one-way functions exist.
A central tool in obtaining our result is a new robust concurrent extraction lemma that we introduce and prove, based on the minimal assumption that one-way functions exist. This lemma shows how to build concurrent extraction procedures that work even in the context of an “external” protocol that cannot be rewound by the extractor. We believe this lemma can be used to simplify many existing works on concurrent security, and is of independent interest. In fact, when used in conjunction with the concurrent simulation schedule of Pass and Venkitasubramaniam (TCC’08), our lemma also yields a constant-round construction based additionally on the existence of quasi-polynomial-time secure one-way functions.
Vipul Goyal, Huijia Lin, Omkant Pandey, Rafael Pass, Amit Sahai
An Alternative Approach to Non-black-box Simulation in Fully Concurrent Setting
Abstract
We give a new proof of the existence of public-coin concurrent zero-knowledge arguments for \(\mathcal{NP}\) in the plain model under standard assumptions (the existence of one-to-one one-way functions and collision-resistant hash functions), which was originally proven by Goyal (STOC’13).
In the proof, we use a new variant of the non-black-box simulation technique of Barak (FOCS’01). An important property of our simulation technique is that the simulator runs in a straight-line manner in the fully concurrent setting. Compared with the simulation technique of Goyal, which also has such a property, the analysis of our simulation technique is (arguably) simpler.
Susumu Kiyoshima
General Statistically Secure Computation with Bounded-Resettable Hardware Tokens
Abstract
Universally composable secure computation was assumed to require trusted setups, until it was realized that parties exchanging (untrusted) tamper-proof hardware tokens allow an alternative approach (Katz; EUROCRYPT 2007). This discovery initiated a line of research dealing with two different types of tokens. Using only a single stateful token, one can implement general statistically secure two-party computation (Döttling, Kraschewski, Müller-Quade; TCC 2011), though all security is lost if an adversarial token receiver manages to physically reset and rerun the token. Stateless tokens, which are by definition secure against any such resetting attacks, provably do not suffice for statistically secure computation in general (Goyal, Ishai, Mahmoody, Sahai; CRYPTO 2010).
We investigate the natural question of what is possible if an adversary can reset a token at most a bounded number of times (e.g., because each resetting attempt imposes a significant risk of triggering a self-destruction mechanism of the token). Somewhat surprisingly, our results come close to the known positive results for non-resettable stateful tokens. In particular, we construct polynomially many instances of statistically secure and universally composable oblivious transfer, using only a constant number of tokens. Our techniques have some abstract similarities to previous solutions, which we capture by defining a new security property for protocols that use oracle access. Additionally, we apply our techniques to zero-knowledge proofs and obtain a protocol that achieves the same properties as bounded-query zero-knowledge PCPs (Kilian, Petrank, Tardos; STOC 1997), even if a malicious prover may issue stateful PCP oracles.
Nico Döttling, Daniel Kraschewski, Jörn Müller-Quade, Tobias Nilges
Resettably Sound Zero-Knowledge Arguments from OWFs - The (Semi) Black-Box Way
Abstract
We construct a constant-round resettably-sound zero-knowledge argument of knowledge based on black-box use of any one-way function. Resettable-soundness was introduced by Barak, Goldreich, Goldwasser and Lindell [FOCS 01] and is a strengthening of the soundness requirement in interactive proofs, demanding that soundness should hold even if the malicious prover is allowed to “reset” and “restart” the verifier. In their work they show that resettably-sound ZK arguments require non-black-box simulation techniques, and also provide the first construction, based on the breakthrough simulation technique of Barak [FOCS 01]. All known implementations of Barak’s non-black-box technique required non-black-box use of a collision-resistant hash function (CRHF).
Very recently, Goyal, Ostrovsky, Scafuro and Visconti [STOC 14] showed an implementation of Barak’s technique that needs only black-box access to a collision-resistant hash function while still having a non-black-box simulator. (Such a construction is referred to as semi black-box.) Plugging this implementation into the compiler of Barak et al. yields the first resettably-sound ZK arguments based on black-box use of CRHFs.
However, from the work of Chung, Pass and Seth [STOC 13] and Bitansky and Paneth [STOC 13], we know that resettably-sound ZK arguments can be constructed from non-black-box use of any one-way function (OWF), which is the minimal assumption for ZK arguments.
Hence, a natural question is whether it is possible to construct resettably-sound zero-knowledge arguments from black-box use of any OWF only. In this work we provide a positive answer to this question, thus closing the gap between black-box and non-black-box constructions for resettably-sound ZK arguments.
Rafail Ostrovsky, Alessandra Scafuro, Muthuramakrishnan Venkitasubramaniam

Non-malleable Codes and Tampering

A Rate-Optimizing Compiler for Non-malleable Codes Against Bit-Wise Tampering and Permutations
Abstract
A non-malleable code protects messages against a class of tampering functions. Informally, a code is non-malleable if the effect of applying any tampering function on an encoded message is to either retain the message or to replace it with an unrelated message. Two main challenges in this area – apart from establishing the feasibility against different families of tampering – are to obtain explicit constructions and to obtain high-rates for such constructions.
In this work, we present a compiler that transforms low-rate (in fact, zero-rate) non-malleable codes against a certain class of tampering into optimal-rate (i.e., rate-1) non-malleable codes against the same class. If the original code is explicit, so is the new one.
When applied to the family of bit-wise tampering functions, this subsumes (and greatly simplifies) a recent result of Cheraghchi and Guruswami (TCC 2014). Further, our compiler can be applied to non-malleable codes against the class of bit-wise tampering and bit-level permutations. Combined with the rate-0 construction in a companion work, this yields the first explicit rate-1 non-malleable code for this family of tampering functions.
Our compiler uses a new technique for bootstrapping non-malleability by introducing errors, which may be of independent interest.
Shashank Agrawal, Divya Gupta, Hemanta K. Maji, Omkant Pandey, Manoj Prabhakaran
Leakage-Resilient Non-malleable Codes
Abstract
A recent trend in cryptography is to construct cryptosystems that are secure against physical attacks. Such attacks are usually divided into two classes: leakage attacks, in which the adversary obtains some information about the internal state of the machine, and tampering attacks, where the adversary can modify this state. One of the popular tools used to provide tamper resistance is the non-malleable code, introduced by Dziembowski, Pietrzak and Wichs (ICS 2010). These codes can be defined in several variants, but arguably the most natural of them are the information-theoretically secure codes in the k-split-state model (the most desired case being k = 2).
Such codes were constructed recently by Aggarwal et al. (STOC 2014). Unfortunately, unlike the earlier, computationally secure constructions (Liu and Lysyanskaya, CRYPTO 2012), these codes are not known to be resilient to leakage. This is unsatisfactory, since in practice one always aims at providing resilience against both leakage and tampering; considering tampering without leakage is especially problematic, since leakage attacks are usually much easier to perform than tampering attacks.
In this paper we close this gap by showing a non-malleable code in the 2-split-state model that remains secure even if almost a 1/12 fraction of the bits of the codeword leaks (in the bounded-leakage model). This is achieved via a generic transformation that takes as input any non-malleable code (Enc, Dec) in the 2-split-state model and constructs from it another non-malleable code (Enc’, Dec’) in the 2-split-state model that is additionally leakage-resilient. The rate of (Enc’, Dec’) is linear in the rate of (Enc, Dec). Our construction requires that Dec be symmetric, i.e., that Dec(x,y) = Dec(y,x) for all x, y, but this property holds for all currently known information-theoretically secure codes in the 2-split-state model. In particular, we can apply our transformation to the code of Aggarwal et al., obtaining the first leakage-resilient code secure in the split-state model. Our transformation can also be applied to other codes (in particular, to a recent code of Aggarwal, Dodis, Kazana and Obremski constructed in work subsequent to this one).
Divesh Aggarwal, Stefan Dziembowski, Tomasz Kazana, Maciej Obremski
Locally Decodable and Updatable Non-malleable Codes and Their Applications
Abstract
Non-malleable codes, introduced as a relaxation of error-correcting codes by Dziembowski, Pietrzak and Wichs (ICS ’10), provide the security guarantee that the message contained in a tampered codeword is either the same as the original message or is set to an unrelated value. Various applications of non-malleable codes have been discovered, and one of the most significant applications among these is the connection with tamper-resilient cryptography. There is a large body of work considering security against various classes of tampering functions, as well as non-malleable codes with enhanced features such as leakage resilience.
In this work, we propose combining the concepts of non-malleability, leakage resilience, and locality in a coding scheme. The contribution of this work is three-fold:
1. As a conceptual contribution, we define a new notion of locally decodable and updatable non-malleable code that combines the above properties.
2. We present two simple and efficient constructions achieving our new notion with different levels of security.
3. We present an important application of our new tool: securing RAM computation against memory tampering and leakage attacks. This is analogous to the usage of traditional non-malleable codes to secure implementations in the circuit model against memory tampering and leakage attacks.
Dana Dachman-Soled, Feng-Hao Liu, Elaine Shi, Hong-Sheng Zhou
Tamper Detection and Continuous Non-malleable Codes
Abstract
We consider a public and keyless code (Enc,Dec) which is used to encode a message m and derive a codeword c = Enc(m). The codeword can be adversarially tampered via a function \(f \in{\mathcal F}\) from some “tampering function family” \(\mathcal F\), resulting in a tampered value c′ = f(c). We study the different types of security guarantees that can be achieved in this scenario for different families \(\mathcal{F}\) of tampering attacks.
Firstly, we initiate the general study of tamper-detection codes, which must detect that tampering occurred and output \(\mathrm{Dec}(c') = \bot\). We show that such codes exist for any family of functions \({\mathcal F}\) over n-bit codewords, as long as \(|{\mathcal F}| < 2^{2^n}\) is sufficiently smaller than the number of all possible functions, and the functions \(f \in{\mathcal F}\) are further restricted in two ways: (1) they can only have a few fixed points x such that f(x) = x, and (2) f(x) must have high entropy over a random x. Such codes can also be made efficient when \(|\mathcal{F}| = 2^{{\rm poly}(n)}\).
Next, we revisit non-malleable codes, which were introduced by Dziembowski, Pietrzak and Wichs (ICS ’10) and require that Dec(c′) either decodes to the original message m, or to some unrelated value (possibly \(\bot\)) that doesn’t provide any information about m. We give a modular construction of non-malleable codes by combining tamper-detection codes and leakage-resilient codes. The resulting construction matches that of Faust et al. (EUROCRYPT ’14) but has a more modular proof and improved parameters.
Finally, we initiate the general study of continuous non-malleable codes, which provide a non-malleability guarantee against an attacker that can tamper a codeword multiple times. We define several variants of the problem depending on: (I) whether tampering is persistent, so that each successive attack modifies the codeword produced by the previous attacks, or non-persistent, so that it is always applied to the original codeword; and (II) whether we can “self-destruct” and stop the experiment once a tampered codeword is detected to be invalid, or whether the attacker can always tamper more. In the case of persistent tampering with self-destruct (the weakest case), we get broad existence results, essentially matching what is known for standard non-malleable codes. In the case of non-persistent tampering without self-destruct (the strongest case), we must further restrict the tampering functions to have few fixed points and high entropy. The two intermediate cases correspond to requiring only one of these two restrictions.
Zahra Jafargholi, Daniel Wichs
Optimal Algebraic Manipulation Detection Codes in the Constant-Error Model
Abstract
Algebraic manipulation detection (AMD) codes, introduced at EUROCRYPT 2008, may, in some sense, be viewed as keyless combinatorial authentication codes that provide security in the presence of an oblivious, algebraic attacker. Their original applications included robust fuzzy extractors, secure message transmission, and robust secret sharing. In recent years, however, a rather diverse array of additional applications in cryptography has emerged. In this paper we consider, for the first time, the regime of arbitrary positive constant error probability ε in combination with unbounded cardinality M of the message space. There are several applications where this model makes sense. Adapting a known bound to this regime, it follows that the binary length ρ of the tag satisfies \(\rho \ge \log\log M + \Omega_{\varepsilon}(1)\). In this paper, we call AMD codes meeting this lower bound optimal. Known constructions, notably a construction based on dedicated polynomial evaluation codes, are a multiplicative factor 2 off from being optimal. By a generic enhancement using error-correcting codes, these parameters can be further improved but remain suboptimal. Reaching optimality efficiently turns out to be surprisingly nontrivial. We propose a novel constructive method based on symmetries of codes. This leads to an explicit construction based on certain BCH codes that improves the parameters of the polynomial construction, and to an efficient randomized construction of optimal AMD codes based on certain quasi-cyclic codes. In all our results, the error probability ε can be chosen as an arbitrarily small positive real number.
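As an illustrative reading of the bound (our arithmetic, not the paper's): for a message space of size \(M = 2^{2^{30}}\), an optimal AMD code has tag length

$$\rho = \log\log M + O_{\varepsilon}(1) = 30 + O_{\varepsilon}(1) \text{ bits},$$

whereas a construction that is a multiplicative factor 2 off, like the polynomial evaluation codes mentioned above, needs roughly 60 bits.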
Ronald Cramer, Carles Padró, Chaoping Xing

Privacy Amplification

Non-malleable Condensers for Arbitrary Min-entropy, and Almost Optimal Protocols for Privacy Amplification
Abstract
Recently, the problem of privacy amplification with an active adversary has received a lot of attention. Given a shared n-bit weak random source X with min-entropy k and a security parameter s, the main goal is to construct an explicit 2-round privacy amplification protocol that achieves entropy loss O(s). Dodis and Wichs [1] showed that optimal protocols can be achieved by constructing explicit non-malleable extractors. However, the best known explicit non-malleable extractor only achieves k = 0.49n [2] and evidence in [2] suggests that constructing explicit non-malleable extractors for smaller min-entropy may be hard. In an alternative approach, Li [3] introduced the notion of a non-malleable condenser and showed that explicit non-malleable condensers also give optimal privacy amplification protocols.
In this paper, we give the first construction of non-malleable condensers for arbitrary min-entropy. Using our construction, we obtain a 2-round privacy amplification protocol with optimal entropy loss for security parameter up to \(s=\Omega(\sqrt{k})\). This is the first protocol that simultaneously achieves optimal round complexity and optimal entropy loss for arbitrary min-entropy k. We also generalize this result to obtain a protocol that runs in \(O(s/\sqrt{k})\) rounds with optimal entropy loss, for security parameter up to s = Ω(k). This significantly improves the protocol in [4]. Finally, we give a better non-malleable condenser for linear min-entropy, and in this case obtain a 2-round protocol with optimal entropy loss for security parameter up to s = Ω(k), which improves the entropy loss and communication complexity of the protocol in [2].
Xin Li

Encryption and Key Exchange

From Single-Bit to Multi-bit Public-Key Encryption via Non-malleable Codes
Abstract
One approach towards basing public-key encryption (PKE) schemes on weak and credible assumptions is to build “stronger” or more general schemes generically from “weaker” or more restricted ones. One particular line of work in this context was initiated by Myers and shelat (FOCS ’09) and continued by Hohenberger, Lewko, and Waters (Eurocrypt ’12), who provide constructions of multi-bit CCA-secure PKE from single-bit CCA-secure PKE.
It is well-known that encrypting each bit of a plaintext string independently is not chosen-ciphertext secure—the resulting scheme is malleable. This paper analyzes the conceptually simple approach of applying a suitable non-malleable code (Dziembowski et al., ICS ’10) to the plaintext and subsequently encrypting the resulting codeword bit-by-bit. An attacker’s ability to make multiple decryption queries requires that the underlying code be continuously non-malleable (Faust et al., TCC ’14). This flavor of non-malleable codes can only be achieved if the decoder is allowed to “self-destruct” when it processes an invalid encoding. The resulting PKE scheme inherits this property and therefore only achieves a weaker variant of chosen-ciphertext security, where the decryption becomes dysfunctional once the attacker submits an invalid ciphertext.
We first show that the above approach based on non-malleable codes indeed yields a solution to the problem of domain extension for public-key encryption where the decryption may self-destruct, provided that the underlying code is continuously non-malleable against a reduced form of bit-wise tampering. This statement is shown by combining a simple information-theoretic argument with the constructive cryptography perspective on PKE (Coretti et al., Asiacrypt ’13). Then, we prove that the code of Dziembowski et al. is actually already continuously non-malleable against (full) bit-wise tampering; this constitutes the first information-theoretically secure continuously non-malleable code, a technical contribution that we believe is of independent interest. Compared to the previous approaches to PKE domain extension, our scheme is more efficient and intuitive, at the cost of not achieving full CCA security. Our result is also one of the first applications of non-malleable codes in a context other than memory tampering.
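The top-level construction being analyzed is simple enough to outline in code. The following Python sketch is our paraphrase: nm_encode/nm_decode and enc1/dec1 are placeholder interfaces standing in for the continuously non-malleable code and the single-bit PKE scheme, respectively:

# Illustrative outline (not from the paper): multi-bit encryption =
# non-malleable encode, then encrypt the codeword bit by bit with a
# single-bit PKE scheme. All four callables are placeholder interfaces.
from typing import Callable, List, Optional

def multi_bit_encrypt(pk, message_bits: List[int],
                      nm_encode: Callable[[List[int]], List[int]],
                      enc1: Callable[[object, int], bytes]) -> List[bytes]:
    codeword = nm_encode(message_bits)       # non-malleable encoding
    return [enc1(pk, bit) for bit in codeword]

def multi_bit_decrypt(sk, ciphertexts: List[bytes], state: dict,
                      nm_decode: Callable[[List[int]], Optional[List[int]]],
                      dec1: Callable[[object, bytes], int]) -> Optional[List[int]]:
    if state.get("self_destructed"):         # decryption stays dysfunctional
        return None
    codeword = [dec1(sk, c) for c in ciphertexts]
    message = nm_decode(codeword)            # None models the invalid symbol
    if message is None:
        state["self_destructed"] = True      # self-destruct on invalid encoding
    return message

The self-destruct flag mirrors the weaker chosen-ciphertext notion described above: one invalid ciphertext permanently disables decryption.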
Sandro Coretti, Ueli Maurer, Björn Tackmann, Daniele Venturi
Constructing and Understanding Chosen Ciphertext Security via Puncturable Key Encapsulation Mechanisms
Abstract
In this paper, we introduce and study a new cryptographic primitive that we call a puncturable key encapsulation mechanism (PKEM), a special class of KEMs that satisfy some functional and security requirements which, combined together, imply chosen-ciphertext security (CCA security). The purpose of introducing this primitive is to capture certain common patterns in the security proofs of several existing CCA-secure public-key encryption (PKE) schemes and KEMs based on general cryptographic primitives which (explicitly or implicitly) use the ideas and techniques of the Dolev-Dwork-Naor (DDN) construction (STOC’91), and to “break down” the proofs into smaller steps, so that each small step is easier to work with, verify, and understand than directly tackling CCA security.
To see the usefulness of PKEMs, we show (1) how several existing constructions of CCA-secure PKE/KEMs based on general cryptographic primitives can be captured as PKEMs, which enables us to understand these constructions via a unified framework, (2) their connection to detectable CCA security (Hohenberger et al., EUROCRYPT’12), and (3) a new security proof for a KEM analogue of the DDN construction from a set of assumptions: sender non-committing encryption (SNCE) and non-interactive witness-indistinguishable proofs.
Then, as our main technical result, we show how to construct a PKEM satisfying our requirements (and thus a CCA-secure KEM) from a new set of general cryptographic primitives: SNCE and symmetric-key encryption secure for key-dependent messages (KDM-secure SKE). Our construction realizes the “decrypt-then-re-encrypt”-style validity check of a ciphertext, which is powerful but in general suffers from a circularity between the plaintext and the randomness. We show how SNCE and KDM-secure SKE can be used together to overcome this circularity. We believe the connection among three seemingly unrelated notions of encryption primitives, i.e., CCA security, the sender non-committing property, and KDM security, to be of theoretical interest.
Takahiro Matsuda, Goichiro Hanaoka
Non-committing Encryption from Φ-hiding
Abstract
A multiparty computation protocol is said to be adaptively secure if it retains its security even in the presence of an adversary who can corrupt participants as the protocol proceeds. This is in contrast to the static corruption model where the adversary is forced to choose which participants to corrupt before the protocol begins.
A central tool for constructing adaptively secure protocols is non-committing encryption (Canetti, Feige, Goldreich and Naor, STOC ’96). The original protocol of Canetti et al. had ciphertext expansion that was quadratic in the security parameter, and prior to this work, the best known constructions had ciphertext expansion that was linear in the security parameter. In this work, we present the first non-committing encryption scheme that achieves ciphertext expansion that is logarithmic in the message length.
Our construction has optimal round complexity (two rounds), where (just as in all previous constructions) the first message consists of a public key of size \(\tilde{\mathcal O}(n\lambda)\), where n is the message length and λ is the security parameter. The second message consists of a ciphertext of size \({\mathcal O}( n \log n + \lambda )\). The security of our scheme is proved based on the Φ-hiding problem.
Brett Hemenway, Rafail Ostrovsky, Alon Rosen
On the Regularity of Lossy RSA
Improved Bounds and Applications to Padding-Based Encryption
Abstract
We provide new bounds on how close to regular the map \(x \mapsto x^e\) is on arithmetic progressions in \(\mathbb{Z}_N\), assuming e | Φ(N) and N is composite. We use these bounds to analyze the security of natural cryptographic problems related to RSA, based on the well-studied Φ-Hiding assumption. For example, under this assumption, we show that RSA PKCS #1 v1.5 is secure against chosen-plaintext attacks for messages of length roughly \(\frac{\log N}{4}\) bits, whereas the previous analysis, due to [19], applies only to messages of length less than \(\frac{\log N}{32}\).
In addition to providing new bounds, we also show that a key lemma of [19] is incorrect. We prove a weaker version of the claim which is nonetheless sufficient for most, though not all, of their applications.
Our technical results can be viewed as showing that exponentiation in \(\mathbb{Z}_N\) is a deterministic extractor for every source that is uniform on an arithmetic progression. Previous work showed this type of statement only on average over a large class of sources, or for much longer progressions (that is, sources with much more entropy).
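The phenomenon can at least be observed numerically. The following toy Python sketch (illustrative parameters only, far too small to be cryptographic) measures how close to regular \(x \mapsto x^e\) is on an arithmetic progression modulo a composite N in the lossy setting e | Φ(N):

# Toy numerical check of the regularity of x -> x^e mod N on an arithmetic
# progression, in the lossy setting e | phi(N) (parameters illustrative only).
from collections import Counter
from math import gcd

p, q, e = 1009, 1013, 7           # e divides p - 1 = 1008, so the map is lossy
N, phi = p * q, (p - 1) * (q - 1)
assert phi % e == 0

start, step, length = 12345, 67, 20000   # an arithmetic progression in Z_N
progression = ((start + i * step) % N for i in range(length))

images = Counter(pow(x, e, N) for x in progression if gcd(x, N) == 1)
counts = images.values()
print(f"{len(images)} distinct images; points per image: "
      f"min={min(counts)}, mean={sum(counts)/len(counts):.2f}, max={max(counts)}")
# Perfect regularity would place the same number of progression points on
# each image; the paper's bounds control how far these counts can deviate.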
Adam Smith, Ye Zhang
Tightly-Secure Authenticated Key Exchange
Abstract
We construct the first Authenticated Key Exchange (AKE) protocol whose security does not degrade with an increasing number of users or sessions. We describe a three-message protocol and prove security in an enhanced version of the classical Bellare-Rogaway security model.
Our construction is modular; it can be instantiated efficiently from standard assumptions (such as the SXDH or DLIN assumptions in pairing-friendly groups). For instance, we provide an SXDH-based protocol whose communication complexity is only 14 group elements and 4 exponents (plus some bookkeeping information).
Along the way we develop new, stronger security definitions for digital signatures and key encapsulation mechanisms. For instance, we introduce a security model for digital signatures that provides existential unforgeability under chosen-message attacks in a multi-user setting with adaptive corruptions of secret keys. We show how to construct efficient schemes that satisfy the new definitions with tight security proofs under standard assumptions.
Christoph Bader, Dennis Hofheinz, Tibor Jager, Eike Kiltz, Yong Li
Backmatter
Metadata
Title
Theory of Cryptography
Edited by
Yevgeniy Dodis
Jesper Buus Nielsen
Copyright year
2015
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-662-46494-6
Print ISBN
978-3-662-46493-9
DOI
https://doi.org/10.1007/978-3-662-46494-6
