
About this Book

The two-volume set LNCS 8269 and 8270 constitutes the refereed proceedings of the 19th International Conference on the Theory and Application of Cryptology and Information Security, Asiacrypt 2013, held in Bengaluru, India, in December 2013. The 54 revised full papers presented were carefully selected from 269 submissions. They are organized in topical sections named: zero-knowledge, algebraic cryptography, theoretical cryptography, protocols, symmetric key cryptanalysis, symmetric key cryptology: schemes and analysis, side-channel cryptanalysis, message authentication codes, signatures, cryptography based upon physical assumptions, multi-party computation, cryptographic primitives, analysis, cryptanalysis and passwords, leakage-resilient cryptography, two-party computation, hash functions.

Table of Contents

Frontmatter

Message Authentication Codes

New Generic Attacks against Hash-Based MACs

Abstract
In this paper we study the security of hash-based MAC algorithms (such as HMAC and NMAC) above the birthday bound. Up to the birthday bound, HMAC and NMAC are proven to be secure under reasonable assumptions on the hash function. On the other hand, if an n-bit MAC is built from a hash function with an l-bit state (l ≥ n), there is a well-known existential forgery attack with complexity \(2^{l/2}\). However, the remaining security after \(2^{l/2}\) computations is not well understood. In particular it is widely assumed that if the underlying hash function is sound, then a generic universal forgery attack should require \(2^n\) computations, and some distinguishing (e.g. distinguishing-H but not distinguishing-R) and state-recovery attacks should also require \(2^l\) computations (or \(2^k\) if k < l).
In this work, we show that above the birthday bound, hash-based MACs offer significantly less security than previously believed. Our main result is a generic distinguishing-H and state-recovery attack against hash-based MACs with a complexity of only \(\tilde O(2^{l/2})\). In addition, we show a key-recovery attack with complexity \(\tilde O(2^{3l/4})\) against HMAC used with a hash function with an internal checksum, such as GOST. This surprising result shows that the use of a checksum might actually weaken a hash function when used in a MAC. We stress that our attacks are generic, and they are in fact more efficient than some previous attacks proposed on MACs instantiated with concrete hash functions.
We use techniques similar to the cycle-detection technique proposed by Peyrin et al. at Asiacrypt 2012 to attack HMAC in the related-key model. However, our attacks work in the single-key model for both HMAC and NMAC, and without restriction on the key size.
Gaëtan Leurent, Thomas Peyrin, Lei Wang
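
These attacks exploit a property that no key can hide: the cycle structure of the functional graph of the compression function with a fixed message block. The sketch below (our illustration, not the paper's attack procedure) uses a truncated 24-bit state and Brent's cycle detection to show that walks from two unrelated starting states typically reach the same main cycle, so its length is measurable without any secret material.

```python
# Toy illustration of the key-independent cycle structure that generic
# attacks on hash-based MACs exploit (a sketch, not the paper's procedure).
import hashlib

STATE_BYTES = 3  # 24-bit toy state so the walks stay short

def f(state: bytes, block: bytes = b"m") -> bytes:
    """Stand-in compression step x -> f(x, m) with a fixed block m."""
    return hashlib.sha256(state + block).digest()[:STATE_BYTES]

def cycle_length(start: bytes) -> int:
    """Brent's cycle detection on the walk start, f(start), f(f(start)), ..."""
    power = lam = 1
    tortoise, hare = start, f(start)
    while tortoise != hare:
        if power == lam:            # open a new power-of-two search window
            tortoise, power, lam = hare, power * 2, 0
        hare = f(hare)
        lam += 1
    return lam

# Walks from unrelated states usually drain into the same main cycle, so the
# measured lengths agree -- a property of f alone, independent of any key.
print(cycle_length(b"\x00\x01\x02"), cycle_length(b"\xaa\xbb\xcc"))
```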

Cryptanalysis of HMAC/NMAC-Whirlpool

Abstract
In this paper, we present universal forgery and key recovery attacks on the most popular hash-based MAC constructions, e.g., HMAC and NMAC, instantiated with the AES-like hash function Whirlpool. These attacks work with Whirlpool reduced to 6 out of 10 rounds in the single-key setting. To the best of our knowledge, this is the first result on “original” key recovery for HMAC (previous works only succeeded in recovering the equivalent keys). Interestingly, the number of attacked rounds is comparable with that for collision and preimage attacks on the Whirlpool hash function itself. Lastly, we present a distinguishing-H attack against the full HMAC- and NMAC-Whirlpool.
Jian Guo, Yu Sasaki, Lei Wang, Shuang Wu

Signatures

Lattice-Based Group Signatures with Logarithmic Signature Size

Abstract
Group signatures are cryptographic primitives where users can anonymously sign messages in the name of a population they belong to. Gordon et al. (Asiacrypt 2010) suggested the first realization of group signatures based on lattice assumptions in the random oracle model. A significant drawback of their scheme is its signature size, which is linear in the cardinality N of the group. A recent extension proposed by Camenisch et al. (SCN 2012) suffers from the same overhead. In this paper, we describe the first lattice-based group signature schemes where the signature and public key sizes are essentially logarithmic in N (for any fixed security level). Our basic construction only satisfies a relaxed definition of anonymity (just like the Gordon et al. system) but readily extends into a fully anonymous group signature (i.e., one that resists adversaries equipped with a signature opening oracle). We prove the security of our schemes in the random oracle model under the SIS and LWE assumptions.
Fabien Laguillaumie, Adeline Langlois, Benoît Libert, Damien Stehlé

The Fiat–Shamir Transformation in a Quantum World

Abstract
The Fiat-Shamir transformation is a famous technique to turn identification schemes into signature schemes. The derived scheme is provably secure in the random-oracle model against classical adversaries. Still, the technique has also been suggested for use with quantum-immune identification schemes, in order to get quantum-immune signature schemes. However, a recent paper by Boneh et al. (Asiacrypt 2011) has raised the issue that results in the random-oracle model may not be immediately applicable to quantum adversaries, because such adversaries should be allowed to query the random oracle in superposition. It has been unclear if the Fiat-Shamir technique is still secure in this quantum random-oracle model (QROM).
Here we discuss that giving proofs for the Fiat-Shamir transformation in the QROM is presumably hard. We show that there cannot be black-box extractors, as long as the underlying quantum-immune identification scheme is secure against active adversaries and the first message of the prover is independent of its witness. Most schemes are of this type. We then discuss that for some schemes one may be able to resurrect the Fiat-Shamir result in the QROM by modifying the underlying protocol first. We discuss in particular a version of the Lyubashevsky scheme which is provably secure in the QROM.
Özgür Dagdelen, Marc Fischlin, Tommaso Gagliardoni
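
As background, here is what the transformation does in the classical setting, as a minimal sketch with toy parameters (group and names ours): the verifier's random challenge in the Schnorr identification protocol is replaced by a hash of the prover's commitment and the message. Note that the commitment a is independent of the witness x, exactly the structural property the black-box impossibility result exploits.

```python
# Minimal classical Fiat-Shamir sketch (toy parameters, not secure): the
# Schnorr identification protocol becomes a signature scheme by deriving
# the challenge as a hash of the commitment and the message.
import hashlib, secrets

p, q, g = 2039, 1019, 4      # toy group: g generates the order-q subgroup of Z_p*

def H(*parts) -> int:
    h = hashlib.sha256()
    for part in parts:
        h.update(str(part).encode())
    return int.from_bytes(h.digest(), "big") % q

x = secrets.randbelow(q)     # witness / signing key
y = pow(g, x, p)             # public key

def sign(m: str):
    r = secrets.randbelow(q)
    a = pow(g, r, p)         # prover's first message: independent of x
    c = H(a, m)              # Fiat-Shamir replaces the verifier's challenge
    z = (r + c * x) % q
    return a, z

def verify(m: str, a: int, z: int) -> bool:
    return pow(g, z, p) == (a * pow(y, H(a, m), p)) % p

a, z = sign("hello")
assert verify("hello", a, z)   # g^z = g^r * g^(c*x) = a * y^c
```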

On the Security of One-Witness Blind Signature Schemes

Abstract
Blind signatures have proved an essential building block for applications that protect privacy while ensuring unforgeability, e.g., electronic cash and electronic voting. One of the oldest and most efficient blind signature schemes is the one due to Schnorr, based on his famous identification scheme. Although it was proposed over twenty years ago, its unforgeability remains an open problem, even in the random-oracle model. In this paper, we show that current techniques for proving security in the random oracle model do not work for the Schnorr blind signature, by providing a meta-reduction which we call a “personal nemesis adversary”. Our meta-reduction is the first one that does not need to reset the adversary and can also rule out reductions to interactive assumptions. Our results generalize to other important blind signatures, such as the one due to Brands. Brands' blind signature is at the heart of Microsoft's newly implemented U-Prove system, which makes this work relevant to cryptographic practice as well.
Foteini Baldimtsi, Anna Lysyanskaya
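
For reference, the protocol in question can be sketched compactly (toy parameters and variable names ours): the user re-randomizes the signer's commitment and shifts the challenge, so the signer never sees the message or the final signature.

```python
# Sketch of the classical blind Schnorr protocol whose unforgeability the
# paper studies (toy parameters, not a secure implementation).
import hashlib, secrets

p, q, g = 2039, 1019, 4          # toy group: g generates the order-q subgroup

def H(*parts) -> int:
    h = hashlib.sha256()
    for part in parts:
        h.update(str(part).encode())
    return int.from_bytes(h.digest(), "big") % q

x = secrets.randbelow(q); y = pow(g, x, p)       # signer's key pair

# Signer, move 1: commit.
r = secrets.randbelow(q); a = pow(g, r, p)

# User: blind the commitment, derive and shift the challenge.
m = "coin #42"
alpha, beta = secrets.randbelow(q), secrets.randbelow(q)
a_blind = (a * pow(g, alpha, p) * pow(y, beta, p)) % p
c_blind = H(a_blind, m)
c = (c_blind + beta) % q                          # all the signer ever sees

# Signer, move 2: respond.
z = (r + c * x) % q

# User: unblind. (a_blind, z_blind) is a plain Schnorr signature on m.
z_blind = (z + alpha) % q
assert pow(g, z_blind, p) == (a_blind * pow(y, c_blind, p)) % p
```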

Cryptography Based Upon Physical Assumptions

Unconditionally Secure and Universally Composable Commitments from Physical Assumptions

Abstract
We present a constant-round unconditional black-box compiler that transforms any ideal (i.e., statistically-hiding and statistically-binding) straight-line extractable commitment scheme into an extractable and equivocal commitment scheme, thereby yielding UC security [9]. We exemplify the usefulness of our compiler by providing two (constant-round) instantiations of ideal straight-line extractable commitment based on (malicious) PUFs [36] and stateless tamper-proof hardware tokens [26], therefore achieving the first unconditionally UC-secure commitments with malicious PUFs and stateless tokens, respectively. Our constructions are secure for adversaries creating arbitrarily malicious stateful PUFs/tokens.
Previous results with malicious PUFs used either computational assumptions to achieve UC-secure commitments or were unconditionally secure but only in the indistinguishability sense [36]. Similarly, with stateless tokens, UC-secure commitments are known only under computational assumptions [13,24,15], while the (not UC) unconditional commitment scheme of [23] is secure only in a weaker model in which the adversary is not allowed to create stateful tokens.
Besides allowing us to prove feasibility of unconditional UC-security with (malicious) PUFs and stateless tokens, our compiler can be instantiated with any ideal straight-line extractable commitment scheme, thus allowing the use of various setup assumptions which may better fit the application or the technology available.
Ivan Damgård, Alessandra Scafuro

Functional Encryption from (Small) Hardware Tokens

Abstract
Functional encryption (FE) enables fine-grained access control of encrypted data while promising simplified key management. In the past few years substantial progress has been made on functional encryption and a weaker variant called predicate encryption. Unfortunately, fundamental impossibility results have been demonstrated for constructing FE schemes for general functions satisfying a simulation-based definition of security.
We show how to use hardware tokens to overcome these impossibility results. In our envisioned scenario, an authority gives a hardware token and some cryptographic information to each authorized user; the user combines these to decrypt received ciphertexts. Our schemes rely on stateless tokens that are identical for all users. (Requiring a different token for each user trivializes the problem, and would be a barrier to practical deployment.) The tokens can implement relatively “lightweight” computation relative to the functions supported by the scheme.
Our token-based approach can be extended to support hierarchical functional encryption, function privacy, and more.
Kai-Min Chung, Jonathan Katz, Hong-Sheng Zhou

Bounded Tamper Resilience: How to Go beyond the Algebraic Barrier

Abstract
Related key attacks (RKAs) are powerful cryptanalytic attacks where an adversary can change the secret key and observe the effect of such changes at the output. The state of the art in RKA security protects against an a-priori unbounded number of certain algebraically induced key relations, e.g., affine functions or polynomials of bounded degree. In this work, we show that it is possible to go beyond the algebraic barrier and achieve security against arbitrary key relations, by restricting the number of tampering queries the adversary is allowed to make. The latter restriction is necessary in case of arbitrary key relations, as otherwise a generic attack of Gennaro et al. (TCC 2004) shows how to recover the key of almost any cryptographic primitive. We describe our contributions in more detail below.
1. We show that standard ID and signature schemes constructed from a large class of Σ-protocols (including the Okamoto scheme, for instance) are secure even if the adversary can arbitrarily tamper with the prover's state a bounded number of times and obtain some bounded amount of leakage. Interestingly, for the Okamoto scheme we can also allow independent tampering with the public parameters.
2. We show a bounded tamper and leakage resilient CCA-secure public-key cryptosystem based on the DDH assumption. We first define a weaker CPA-like security notion that we can instantiate based on DDH, and then we give a general compiler that yields CCA security with tamper and leakage resilience. This requires a public tamper-proof common reference string.
3. Finally, we explain how to boost bounded tampering and leakage resilience (as in 1. and 2. above) to continuous tampering and leakage resilience, in the so-called floppy model, where each user has a personal hardware token (containing leak- and tamper-free information) which can be used to refresh the secret key.
We believe that bounded tampering is a meaningful and interesting alternative to avoid known impossibility results and can provide important insights into the security of existing standard cryptographic schemes.
Ivan Damgård, Sebastian Faust, Pratyay Mukherjee, Daniele Venturi

Tamper Resilient Circuits: The Adversary at the Gates

Abstract
We initiate the investigation of gate-tampering attacks against cryptographic circuits. Our model is motivated by the plausibility of tampering directly with circuit gates and by the increasing use of tamper resilient gates among the known constructions that are shown to be resilient against wire-tampering adversaries. We prove that gate-tampering is strictly stronger than wire-tampering. On the one hand, we show that there is a gate-tampering strategy that perfectly simulates any given wire-tampering strategy. On the other, we construct families of circuits over which it is impossible for any wire-tampering attacker to simulate a certain gate-tampering attack (that we explicitly construct). We also provide a tamper resilience impossibility result that applies to both gate and wire tampering adversaries and relates the amount of tampering to the depth of the circuit. Finally, we show that defending against gate-tampering attacks is feasible by appropriately abstracting and analyzing the circuit compiler of Ishai et al. [18] in a manner which may be of independent interest. Specifically, we first introduce a class of compilers that, assuming certain well defined tamper resilience characteristics against a specific class of attackers, can be shown to produce tamper resilient circuits against that same class of attackers. Then, we describe a compiler in this class for which we prove that it possesses the necessary tamper-resilience characteristics against gate-tampering attackers.
Aggelos Kiayias, Yiannis Tselekounis

Multi-Party Computation

Efficient General-Adversary Multi-Party Computation

Abstract
Secure multi-party computation (MPC) allows a set \(\mathcal{P}\) of n players to evaluate a function f in presence of an adversary who corrupts a subset of the players. In this paper we consider active, general adversaries, characterized by a so-called adversary structure \(\mathcal{Z}\) which enumerates all possible subsets of corrupted players. In particular, for small sets of players, general adversaries capture real-world requirements better than classical threshold adversaries.
Protocols for general adversaries are “efficient” in the sense that they require \(|\mathcal{Z}|^{\mathcal{O}(1)}\) bits of communication. However, as \(|\mathcal{Z}|\) is usually very large (even exponential in n), the exact exponent is very relevant. In the setting with perfect security, the most efficient protocol known to date communicates \(\mathcal{O}(|\mathcal{Z}|^3)\) bits; we present a protocol for this setting which communicates \(\mathcal{O}(|\mathcal{Z}|^2)\) bits. In the setting with statistical security, \(\mathcal{O}(|\mathcal{Z}|^3)\) bits of communication is needed in general (whereas for a very restricted subclass of adversary structures, a protocol with communication \(\mathcal{O}(|\mathcal{Z}|^2)\) bits is known); we present a protocol for this setting (without limitations) which communicates \(\mathcal{O}(|\mathcal{Z}|)\) bits.
Martin Hirt, Daniel Tschudi
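
To see where the \(|\mathcal{Z}|\) factor in the communication comes from, consider the standard replicated secret sharing for a general adversary structure (an illustration of the setting, not the paper's protocol): the secret is split into one additive part per maximal unqualified set, and each part is replicated to every player outside that set.

```python
# Replicated secret sharing for a general adversary structure Z (illustration
# of the setting, not the paper's protocol): |Z| additive parts, each held by
# all players outside the corresponding unqualified set.
import secrets

q = 2**61 - 1                                   # share arithmetic modulo a prime

def share(secret, players, Z):
    """Z: the maximal unqualified sets, as frozensets of player ids."""
    parts = [secrets.randbelow(q) for _ in Z[:-1]]
    parts.append((secret - sum(parts)) % q)     # parts sum to the secret
    shares = {pl: {} for pl in players}
    for subset, part in zip(Z, parts):
        for pl in players - subset:             # withheld exactly from subset
            shares[pl][subset] = part
    return shares

def reconstruct(coalition_shares, Z):
    total = 0
    for subset in Z:
        held = [s[subset] for s in coalition_shares.values() if subset in s]
        assert held, "coalition is inside a set of Z and thus unqualified"
        total = (total + held[0]) % q
    return total

players = frozenset({1, 2, 3})
Z = [frozenset({1}), frozenset({2, 3})]         # adversary corrupts {1} or {2,3}
sh = share(1234, players, Z)
print(reconstruct({1: sh[1], 2: sh[2]}, Z))     # -> 1234
```

Every operation in such protocols touches all \(|\mathcal{Z}|\) parts, which is why lowering the exponent of \(|\mathcal{Z}|\) in the communication matters so much.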

Fair and Efficient Secure Multiparty Computation with Reputation Systems

Abstract
A reputation system for a set of entities is essentially a list of scores that provides a measure of the reliability of each entity in the set. The score given to an entity can be interpreted (and in the reputation system literature it often is [12]) as the probability that an entity will behave honestly. In this paper, we ask whether or not it is possible to utilize reputation systems for carrying out secure multiparty computation. We provide formal definitions of secure computation in this setting, and carry out a theoretical study of feasibility. We present almost tight results showing when it is and is not possible to achieve fair secure computation in our model. We suggest applications for our model in settings where some information about the honesty of other parties is given. This can be preferable to the current situation where either an honest majority is arbitrarily assumed, or a protocol that is secure for a dishonest majority is used and the efficiency and security guarantees (including fairness) of an honest majority are not obtained.
Gilad Asharov, Yehuda Lindell, Hila Zarosim

Between a Rock and a Hard Place: Interpolating between MPC and FHE

Abstract
We present a computationally secure MPC protocol for threshold adversaries which is parametrized by a value L. When L = 2 we obtain a classical form of MPC protocol in which interaction is required for multiplications; as L increases, interaction is reduced, in that one requires interaction only after computing a higher-degree function. When L approaches infinity one obtains the FHE-based protocol of Gentry, which requires no interaction. Thus one can trade communication for computation in a simple way. Our protocol is based on an interactive protocol for “bootstrapping” a somewhat homomorphic encryption (SHE) scheme. The key contribution is that our protocol is highly communication-efficient, enabling reduced communication compared to traditional MPC protocols for relatively small values of L.
Ashish Choudhury, Jake Loftus, Emmanuela Orsini, Arpita Patra, Nigel P. Smart

Cryptographic Primitives

Building Lossy Trapdoor Functions from Lossy Encryption

Abstract
Injective one-way trapdoor functions are one of the most fundamental cryptographic primitives. In this work we show how to derandomize lossy encryption (with long messages) to obtain lossy trapdoor functions, and hence injective one-way trapdoor functions.
Bellare, Halevi, Sahai and Vadhan (CRYPTO ’98) showed that if Enc is an IND-CPA secure cryptosystem, and H is a random oracle, then x ↦ Enc(x, H(x)) is an injective trapdoor function. In this work, we show that if Enc is a lossy encryption with messages at least 1-bit longer than randomness, and h is a pairwise independent hash function, then x ↦ Enc(x, h(x)) is a lossy trapdoor function, and hence also an injective trapdoor function.
The works of Peikert, Vaikuntanathan and Waters and Hemenway, Libert, Ostrovsky and Vergnaud showed that statistically-hiding 2-round Oblivious Transfer (OT) is equivalent to Lossy Encryption. In their construction, if the sender randomness is shorter than the message in the OT, it will also be shorter than the message in the lossy encryption. This gives an alternate interpretation of our main result. In this language, we show that any 2-message statistically sender-private semi-honest oblivious transfer (OT) for strings longer than the sender randomness implies the existence of injective one-way trapdoor functions. This is in contrast to the black box separation of injective trapdoor functions from many common cryptographic protocols, e.g. IND-CCA encryption.
Brett Hemenway, Rafail Ostrovsky

Pseudorandom Generators from Regular One-Way Functions: New Constructions with Improved Parameters

Abstract
We revisit the problem of basing pseudorandom generators on regular one-way functions, and present the following constructions:
  • For any known-regular one-way function (on n-bit inputs) that is known to be ε-hard to invert, we give a neat (and tighter) proof for the folklore construction of a pseudorandom generator with seed length Θ(n) making a single call to the underlying one-way function.
  • For any unknown-regular one-way function with known ε-hardness, we give a new construction with seed length Θ(n) and O(n/log(1/ε)) calls. Here the number of calls is also optimal by matching the lower bounds of Holenstein and Sinha (FOCS 2012).
Both constructions require knowledge of ε, but the dependency can be removed while keeping nearly the same parameters. In the latter case, we get a construction of a pseudorandom generator from any unknown-regular one-way function using seed length \(\tilde{O}(n)\) and \(\tilde{O}(n/\log n)\) calls, where \(\tilde{O}\) omits a factor that can be made arbitrarily close to constant (e.g. \(\log\log\log n\) or even less). This improves the randomized iterate approach by Haitner, Harnik and Reingold (CRYPTO 2006), which requires seed length \(O(n\cdot\log n)\) and \(O(n/\log n)\) calls.
Yu Yu, Xiangxue Li, Jian Weng

Constrained Pseudorandom Functions and Their Applications

Abstract
We put forward a new notion of pseudorandom functions (PRFs) we call constrained PRFs. In a standard PRF there is a master key k that enables one to evaluate the function at all points in the domain of the function. In a constrained PRF it is possible to derive constrained keys \(k_S\) from the master key k. A constrained key \(k_S\) enables the evaluation of the PRF at a certain subset S of the domain and nowhere else. We present a formal framework for this concept and show that constrained PRFs can be used to construct powerful primitives such as identity-based key exchange and a broadcast encryption system with optimal ciphertext size. We then construct constrained PRFs for several natural set systems needed for these applications. We conclude with several open problems relating to this new concept.
Dan Boneh, Brent Waters
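
The simplest concrete example of the notion is a prefix-constrained PRF built from the GGM tree: the inner node reached by a prefix serves as the constrained key for the set of inputs starting with that prefix. A minimal sketch (ours, with SHA-256 standing in for the length-doubling PRG):

```python
# Prefix-constrained PRF from the GGM tree (minimal sketch): the node value
# for a prefix acts as the constrained key k_S for S = {inputs with that
# prefix}, and reveals nothing about leaves outside the subtree.
import hashlib

def prg(seed: bytes, bit: str) -> bytes:
    """One half of a length-doubling PRG, selected by the bit."""
    return hashlib.sha256(seed + bit.encode()).digest()

def evaluate(key: bytes, bits: str) -> bytes:
    for b in bits:
        key = prg(key, b)
    return key

master = b"\x00" * 32                 # master key k
x = "0110"
k_prefix = evaluate(master, "01")     # constrained key for the prefix "01"
assert evaluate(k_prefix, "10") == evaluate(master, x)   # same leaf value
# Inputs outside the "01" subtree stay pseudorandom given only k_prefix.
```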

Fully Homomorphic Message Authenticators

Abstract
We define and construct a new primitive called a fully homomorphic message authenticator. With such a scheme, anybody can perform arbitrary computations over authenticated data and produce a short tag that authenticates the result of the computation (without knowing the secret key). This tag can be verified using the secret key to ensure that the claimed result is indeed the correct output of the specified computation over previously authenticated data (without knowing the underlying data). For example, Alice can upload authenticated data to “the cloud”, which then performs some specified computations over this data and sends the output to Bob, along with a short tag that convinces Bob of correctness. Alice and Bob only share a secret key, and Bob never needs to know Alice’s underlying data. Our construction relies on fully homomorphic encryption to build fully homomorphic message authenticators.
Rosario Gennaro, Daniel Wichs

Analysis, Cryptanalysis and Passwords

Non-uniform Cracks in the Concrete: The Power of Free Precomputation

Abstract
AES-128, the NIST P-256 elliptic curve, DSA-3072, RSA-3072, and various higher-level protocols are frequently conjectured to provide a security level of \(2^{128}\). Extensive cryptanalysis of these primitives appears to have stabilized sufficiently to support such conjectures.
In the literature on provable concrete security it is standard to define \(2^b\) security as the nonexistence of high-probability attack algorithms taking time \(\leq 2^b\). However, this paper provides overwhelming evidence for the existence of high-probability attack algorithms against AES-128, NIST P-256, DSA-3072, and RSA-3072 taking time considerably below \(2^{128}\), contradicting the standard security conjectures.
These attack algorithms are not realistic; do not indicate any actual security problem; do not indicate any risk to cryptographic users; and do not indicate any failure in previous cryptanalysis. Any actual use of these attack algorithms would be much more expensive than the conventional \(2^{128}\) attack algorithms. However, this expense is not visible to the standard definitions of security. Consequently the standard definitions of security fail to accurately model actual security.
The underlying problem is that the standard set of algorithms, namely the set of algorithms taking time \(\leq 2^b\), fails to accurately model the set of algorithms that an attacker can carry out. This paper analyzes this failure in detail, and analyzes several ideas for fixing the security definitions.
Daniel J. Bernstein, Tanja Lange

Factoring RSA Keys from Certified Smart Cards: Coppersmith in the Wild

Abstract
This paper explains how an attacker can efficiently factor 184 distinct RSA keys out of more than two million 1024-bit RSA keys downloaded from Taiwan’s national “Citizen Digital Certificate” database. These keys were generated by government-issued smart cards that have built-in hardware random-number generators and that are advertised as having passed FIPS 140-2 Level 2 certification.
These 184 keys include 103 keys that share primes and that are efficiently factored by a batch-GCD computation. This is the same type of computation that was used last year by two independent teams (USENIX Security 2012: Heninger, Durumeric, Wustrow, Halderman; Crypto 2012: Lenstra, Hughes, Augier, Bos, Kleinjung, Wachter) to factor tens of thousands of cryptographic keys on the Internet.
The remaining 81 keys do not share primes. Factoring these 81 keys requires taking deeper advantage of randomness-generation failures: first using the shared primes as a springboard to characterize the failures, and then using Coppersmith-type partial-key-recovery attacks. This is the first successful public application of Coppersmith-type attacks to keys found in the wild.
Daniel J. Bernstein, Yun-An Chang, Chen-Mou Cheng, Li-Ping Chou, Nadia Heninger, Tanja Lange, Nicko van Someren
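
The shared-prime step is easy to reproduce in miniature. The quadratic pairwise scan below (toy 16-bit primes of our choosing) shows the idea; the actual computation uses a product/remainder tree to process millions of moduli in quasi-linear time.

```python
# Shared-prime detection in miniature: a pairwise-GCD scan over toy RSA
# moduli (the paper's batch-GCD achieves the same result in quasi-linear
# time via product and remainder trees).
from math import gcd

p1, p2, q1 = 65521, 65519, 65497        # small primes standing in for 512-bit ones
moduli = [p1 * q1, p2 * q1, p1 * p2]    # bad randomness -> repeated factors

for i in range(len(moduli)):
    for j in range(i + 1, len(moduli)):
        g = gcd(moduli[i], moduli[j])
        if 1 < g < moduli[i]:           # a shared prime factors both keys
            print(f"N{i}, N{j} share {g}: N{i} = {g} * {moduli[i] // g}")
```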

Naturally Rehearsing Passwords

Abstract
We introduce quantitative usability and security models to guide the design of password management schemes — systematic strategies to help users create and remember multiple passwords. In the same way that security proofs in cryptography are based on complexity-theoretic assumptions (e.g., hardness of factoring and discrete logarithm), we quantify usability by introducing usability assumptions. In particular, password management relies on assumptions about human memory, e.g., that a user who follows a particular rehearsal schedule will successfully maintain the corresponding memory. These assumptions are informed by research in cognitive science and can be tested empirically. Given rehearsal requirements and a user’s visitation schedule for each account, we use the total number of extra rehearsals that the user would have to do to remember all of his passwords as a measure of the usability of the password scheme. Our usability model leads us to a key observation: password reuse benefits users not only by reducing the number of passwords that the user has to memorize, but more importantly by increasing the natural rehearsal rate for each password. We also present a security model which accounts for the complexity of password management with multiple accounts and associated threats, including online, offline, and plaintext password leak attacks. Observing that current password management schemes are either insecure or unusable, we present Shared Cues — a new scheme in which the underlying secret is strategically shared across accounts to ensure that most rehearsal requirements are satisfied naturally while simultaneously providing strong security. The construction uses the Chinese Remainder Theorem to achieve these competing goals.
Jeremiah Blocki, Manuel Blum, Anupam Datta

Leakage-Resilient Cryptography

Leakage-Resilient Chosen-Ciphertext Secure Public-Key Encryption from Hash Proof System and One-Time Lossy Filter

Abstract
We present a new generic construction of a public-key encryption (PKE) scheme secure against leakage-resilient chosen-ciphertext attacks (LR-CCA), from any Hash Proof System (HPS) and any one-time lossy filter (OT-LF). Efficient constructions of HPSs and OT-LFs from the DDH and DCR assumptions suggest that our construction is a practical approach to LR-CCA security. Most practical PKEs with LR-CCA security, like variants of the Cramer-Shoup scheme, are rooted in Hash Proof Systems, but achieve leakage rates of at most 1/4 − o(1) (defined as the ratio of leakage amount to secret-key size). The instantiations of our construction from the DDH and DCR assumptions result in LR-CCA secure PKEs with a leakage rate of 1/2 − o(1). On the other hand, our construction also yields a new approach for constructing IND-CCA secure (leakage-free) PKE schemes, which may be of independent interest.
Baodong Qin, Shengli Liu

On Continual Leakage of Discrete Log Representations

Abstract
Let \(\mathbb{G}\) be a group of prime order q, and let \(g_1,\ldots,g_n\) be random elements of \(\mathbb{G}\). We say that a vector x = \((x_1,\ldots,x_n)\in \mathbb{Z}_q^n\) is a discrete log representation of some element \(y\in\mathbb{G}\) (with respect to \(g_1,\ldots,g_n\)) if \(g_1^{x_1}\cdots g_n^{x_n} = y\). Any element y has many discrete log representations, forming an affine subspace of \(\mathbb{Z}_q^n\). We show that these representations have a nice continuous leakage-resilience property as follows. Assume some attacker \(\mathcal{A}(g_1,\ldots,g_n,y)\) can repeatedly learn L bits of information on arbitrarily many random representations of y. That is, \(\mathcal{A}\) adaptively chooses polynomially many leakage functions \(f_i:\mathbb{Z}_q^n\rightarrow \{0,1\}^L\), and learns the value \(f_i(x_i)\), where \(x_i\) is a fresh and random discrete log representation of y. \(\mathcal{A}\) wins the game if it eventually outputs a valid discrete log representation x* of y. We show that if the discrete log assumption holds in \(\mathbb{G}\), then no polynomially bounded \(\mathcal{A}\) can win this game with non-negligible probability, as long as the leakage on each representation is bounded by \(L\approx (n-2)\log q = (1-\frac{2}{n})\cdot |x|\).
As direct extensions of this property, we design very simple continuous leakage-resilient (CLR) one-way function (OWF) and public-key encryption (PKE) schemes in the so-called “invisible key update” model introduced by Alwen et al. at CRYPTO’09. Our CLR-OWF is based on the standard Discrete Log assumption and our CLR-PKE is based on the standard Decisional Diffie-Hellman assumption. Prior to our work, such schemes could only be constructed in groups with a bilinear pairing.
As another surprising application, we show how to design the first leakage-resilient traitor tracing scheme, where no attacker, getting the secret keys of a small subset of decoders (called “traitors”) and bounded leakage on the secret keys of all other decoders, can create a valid decryption key which will not be traced back to at least one of the traitors.
Shweta Agrawal, Yevgeniy Dodis, Vinod Vaikuntanathan, Daniel Wichs
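
A toy sketch of the central object (assumption ours: the sampler knows the exponents \(\alpha_i\) with \(g_i = g^{\alpha_i}\), as a reduction would): representations of y form an affine subspace, and a fresh random representation is obtained by adding any vector orthogonal to \(\alpha\).

```python
# Discrete log representations in a toy group: all representations of y form
# an affine subspace of Z_q^n, and knowing alpha (as a reduction does) makes
# refreshing a representation easy -- add a random d with <alpha, d> = 0.
import secrets

p, q, g, n = 2039, 1019, 4, 4            # toy order-q subgroup of Z_p*
alpha = [1 + secrets.randbelow(q - 1) for _ in range(n)]
gs = [pow(g, a, p) for a in alpha]       # g_i = g^alpha_i

def commit(vec):
    out = 1
    for gi, vi in zip(gs, vec):
        out = (out * pow(gi, vi, p)) % p
    return out

x = [secrets.randbelow(q) for _ in range(n)]       # one representation of y
y = commit(x)

def refresh(x):
    """Fresh representation of the same y: add d with sum(alpha_i*d_i) = 0."""
    d = [secrets.randbelow(q) for _ in range(n - 1)]
    s = sum(a * di for a, di in zip(alpha, d)) % q
    d.append((-s * pow(alpha[-1], -1, q)) % q)     # fix the last coordinate
    return [(xi + di) % q for xi, di in zip(x, d)]

x2 = refresh(x)
assert commit(x2) == y and x2 != x
```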

Two-Party Computation

Hiding the Input-Size in Secure Two-Party Computation

Abstract
In the setting of secure multiparty computation, a set of parties wish to compute a joint function of their inputs, while preserving properties like privacy, correctness, and independence of inputs. One security property that has typically not been considered in the past relates to the length or size of the parties' inputs. This is despite the fact that in many cases the size of a party's input can be confidential. The reason for this omission seems to have been the folklore belief that, as with encryption, it is impossible to carry out non-trivial secure computation while hiding the size of parties' inputs. However, some recent results (e.g., Ishai and Paskin at TCC 2007, Ateniese, De Cristofaro and Tsudik at PKC 2011) showed that it is possible to hide the input size of one of the parties for some limited class of functions, including secure two-party set intersection. This suggests that the folklore belief may not be fully accurate.
In this work, we initiate a theoretical study of input-size hiding secure computation, and focus on the two-party case. We present definitions for this task, and deal with the subtleties that arise in the setting where there is no a priori polynomial bound on the parties’ input sizes. Our definitional study yields a multitude of classes of input-size hiding computation, depending on whether a single party’s input size remains hidden or both parties’ input sizes remain hidden, and depending on who receives output and if the output size is hidden from a party in the case that it does not receive output. We prove feasibility and impossibility results for input-size hiding secure two-party computation. Some of the highlights are as follows:
  • Under the assumption that fully homomorphic encryption (FHE) exists, there exist non-trivial functions (e.g., the millionaire’s problem) that can be securely computed while hiding the input size of both parties.
  • Under the assumption that FHE exists, every function can be securely computed while hiding the input size of one party, when both parties receive output (or when the party not receiving output does learn the size of the output). In the case of functions with fixed output length, this implies that every function can be securely computed while hiding one party’s input size.
  • There exist functions that cannot be securely computed while hiding both parties’ input sizes. This is the first formal proof that, in general, some information about the size of the parties’ inputs must be revealed.
Our results are in the semi-honest model. The problem of input-size hiding is already challenging in this scenario. We discuss the additional difficulties that arise in the malicious setting and leave this extension for future work.
Yehuda Lindell, Kobbi Nissim, Claudio Orlandi

Secure Two-Party Computation with Reusable Bit-Commitments, via a Cut-and-Choose with Forge-and-Lose Technique

(Extended Abstract)
Abstract
A secure two-party computation (S2PC) protocol allows two parties to compute over their combined private inputs, as if intermediated by a trusted third party. In the malicious model, this can be achieved with a cut-and-choose of garbled circuits (C&C-GCs), where some GCs are verified for correctness and the remaining are evaluated to determine the circuit output. This paper presents a new C&C-GCs-based S2PC protocol, with significant advantages in efficiency and applicability. First, in contrast with prior protocols that require a majority of evaluated GCs to be correct, the new protocol only requires that at least one evaluated GC is correct. In practice this reduces the total number of GCs to approximately one third, for the same statistical security goal. This is accomplished by augmenting the C&C with a new forge-and-lose technique based on bit commitments with trapdoor. Second, the output of the new protocol includes reusable XOR-homomorphic bit commitments of all circuit input and output bits, thereby enabling efficient linkage of several S2PCs in a reactive manner. The protocol has additional interesting characteristics (which may allow new comparison tradeoffs), such as needing a low number of exponentiations, using a 2-out-of-1 type of oblivious transfer, and using the C&C structure to statistically verify the consistency of input wire keys.
Luís T. A. N. Brandão
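
A back-of-envelope computation (ours, under a simplified cut-and-choose model) reproduces the "approximately one third" claim: with n circuits of which e are evaluated, compare the best cheating probability when a majority of evaluated GCs must be correct against the forge-and-lose requirement, under which one honest evaluated GC already suffices.

```python
# Rough check of the "one third as many GCs" claim (simplified model, ours).
# A cheater escapes only if no corrupted circuit is checked. Majority-based
# C&C: the cheater wins if corrupted GCs form a majority of the evaluated
# set. Forge-and-lose: one honest evaluated GC suffices, so the cheater must
# corrupt exactly the evaluated set.
from math import comb

def p_majority(n, e):
    b = e // 2 + 1                        # cheapest winning corruption level
    return comb(n - b, n - e) / comb(n, n - e)

def p_one_good(n, e):
    return 1 / comb(n, e)                 # must guess the evaluated set exactly

def circuits_needed(target, prob):
    n = 2
    while min(prob(n, e) for e in range(1, n)) > target:
        n += 1
    return n

target = 2.0 ** -40
print(circuits_needed(target, p_majority), circuits_needed(target, p_one_good))
# roughly 125 vs 44 circuits for 2^-40 security: about a threefold saving
```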

Hash Functions

A Heuristic for Finding Compatible Differential Paths with Application to HAS-160

Abstract
The question of compatibility of differential paths plays a central role in second order collision attacks on hash functions. In this context, attacks typically proceed by starting from the middle and constructing the middle-steps quartet in which the two paths are enforced on the respective faces of the quartet structure. Finding paths that can fit in such a quartet structure has been a major challenge and the currently known compatible paths extend over a suboptimal number of steps for hash functions such as SHA-2 and HAS-160. In this paper, we investigate a heuristic that searches for compatible differential paths. The application of the heuristic in case of HAS-160 yields a practical second order collision over all of the function steps, which is the first practical result that covers all of the HAS-160 steps. An example of a colliding quartet is provided.
Aleksandar Kircanski, Riham AlTawy, Amr M. Youssef

Improved Cryptanalysis of Reduced RIPEMD-160

Abstract
In this article, we propose an improved cryptanalysis of the double-branch hash function standard RIPEMD-160. Using a carefully designed non-linear path search tool, we study the potential differential paths that can be constructed from a difference in a single message word and show that some of these message words can lead to very good differential path candidates. Leveraging the recent freedom degree utilization technique from Landelle and Peyrin to merge two branch instances, we eventually obtain a semi-free-start collision attack for 42 steps of the RIPEMD-160 compression function, while the previously best known result reached 36 steps. In addition, we also describe a 36-step semi-free-start collision attack which starts from the first step.
Florian Mendel, Thomas Peyrin, Martin Schläffer, Lei Wang, Shuang Wu

Limited-Birthday Distinguishers for Hash Functions

Collisions beyond the Birthday Bound Can Be Meaningful
Abstract
In this article, we investigate the use of limited-birthday distinguishers in the context of hash functions. We first provide a proper understanding of the limited-birthday problem and demonstrate its soundness by using a new security notion, Differential Target Collision Resistance (dTCR), that is related to the classical Target Collision Resistance (TCR) notion. We then solve an open problem and close the existing security gap by proving that the best known generic attack proposed at FSE 2010 for the limited-birthday problem is indeed the best possible method.
Moreover, we show that almost all known collision attacks are in fact more than just a collision finding algorithm, since the difference mask for the message input is usually fixed. A direct and surprising corollary is that these collision attacks are interesting for cryptanalysis even when their complexity goes beyond the \(2^{n/2}\) birthday bound and up to the \(2^n\) preimage bound, and can be used to derive distinguishers using the limited-birthday problem. Interestingly, cryptanalysts can now search for collision attacks beyond the \(2^{n/2}\) birthday bound.
Finally, we describe a generic algorithm that turns a semi-free-start collision attack on a compression function (even if its complexity is beyond the birthday bound) into a distinguisher on the whole hash function when its internal state is not too wide. To the best of our knowledge, this is the first result that exploits classical semi-free-start collisions on the compression function to exhibit a weakness on the whole hash function. As an application of our findings, we provide distinguishers on reduced or full versions of several hash functions, such as RIPEMD-128, SHA-256, Whirlpool, etc.
Mitsugu Iwamoto, Thomas Peyrin, Yu Sasaki
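
A brute-force illustration of the problem being studied (ours, not the structure-based FSE 2010 algorithm whose optimality the paper establishes): fix an input difference, and look for a pair whose output difference lands in a restricted set, here "top t bits zero" on a truncated toy hash.

```python
# Naive limited-birthday search on a toy 32-bit hash: fixed input difference,
# output difference constrained to have its top T_BITS equal to zero. The
# generic structure-based algorithm does strictly better when the allowed
# input difference set is large.
import hashlib
from itertools import count

N_BYTES, T_BITS = 4, 12            # 32-bit toy hash, 12-bit output constraint
DELTA_IN = 0x80000001              # fixed input difference (our choice)

def h(x: int) -> int:
    digest = hashlib.sha256(x.to_bytes(8, "big")).digest()[:N_BYTES]
    return int.from_bytes(digest, "big")

for x in count():
    if (h(x) ^ h(x ^ DELTA_IN)) >> (8 * N_BYTES - T_BITS) == 0:
        print(f"limited-birthday pair: ({x}, {x ^ DELTA_IN})")
        break                      # expected after about 2^12 trials
```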

On Diamond Structures and Trojan Message Attacks

Abstract
The first part of this paper considers the diamond structures which were first introduced and applied in the herding attack by Kelsey and Kohno [7]. We present a new method for the construction of a diamond structure with \(2^d\) chaining values, the message complexity of which is \(\mathrm{O}(2^{\frac{n+d}{2}})\). Here n is the length of the compression function used. The aforementioned complexity was (with intuitive reasoning) suggested to be true in [7] and later disputed by Blackburn et al. in [3]. In the second part of our paper we give new, efficient variants for the two types of Trojan message attacks against Merkle-Damgård hash functions presented by Andreeva et al. [1]. The message complexities of the Collision Trojan Attack and the stronger Herding Trojan Attack in [1] are \(\mathrm{O}(2^{\frac{n}{2}+r})\) and \(\mathrm{O}(2^{\frac{2n}{3}}+2^{\frac{n}{2}+r})\), respectively. Our variants of the above two attack types are the Weak Trojan Attack and the Strong Trojan Attack, having the complexities \(\mathrm{O}(2^{\frac{n+r}{2}})\) and \(\mathrm{O}(2^{\frac{2n-s}{3}}+2^{\frac{n+r}{2}})\), respectively. Here \(2^r\) is the cardinality of the prefix set and \(2^s\) is the length of the Trojan message in the Strong Trojan Attack.
Tuomas Kortelainen, Juha Kortelainen
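
A diamond structure can be built in miniature with a truncated compression function (toy parameters ours). The naive pairwise merging below costs about \(2^{n/2}\) per collision, hence \(2^{n/2+d}\) messages in total; the point of the paper's first part is that the whole structure can be built for \(\mathrm{O}(2^{\frac{n+d}{2}})\) messages.

```python
# Tiny diamond structure (d = 2): four starting chaining values are merged
# pairwise, layer by layer, by searching message blocks that collide on a
# truncated compression function. Naive per-pair birthday search, ~2^(n/2)
# work per merge on this 24-bit toy state.
import hashlib
from itertools import count

STATE = 3                                     # 24-bit toy chaining value

def f(cv: bytes, block: bytes) -> bytes:
    return hashlib.sha256(cv + block).digest()[:STATE]

def collide(cv_a: bytes, cv_b: bytes) -> bytes:
    """Find blocks m_a, m_b with f(cv_a, m_a) == f(cv_b, m_b); return the value."""
    seen = {}
    for i in count():
        m = i.to_bytes(4, "big")
        seen[f(cv_a, m)] = m
        out = f(cv_b, m)
        if out in seen:
            return out

layer = [bytes([i]) * STATE for i in range(4)]   # 2^d = 4 starting values
while len(layer) > 1:
    layer = [collide(a, b) for a, b in zip(layer[::2], layer[1::2])]
print("diamond root:", layer[0].hex())
```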

Backmatter
