
2015 | Book

Information Theoretic Security

8th International Conference, ICITS 2015, Lugano, Switzerland, May 2-5, 2015. Proceedings


About this book

This book constitutes the thoroughly refereed proceedings of the 8th International Conference on Information Theoretic Security, ICITS 2015, held in Lugano, Switzerland, in May 2015.

The 17 full papers presented in this volume were carefully reviewed and selected from 57 submissions. The papers cover a variety of topics at the intersection of cryptography, information theory, and quantum physics.

Table of Contents

Frontmatter
Practical Sharing of Quantum Secrets over Untrusted Channels
Abstract
In this work we address the issue of sharing a quantum secret over untrusted channels between the dealer and players. Existing solutions require entanglement over a number of systems that scales with the security parameter, quickly becoming impractical. We present protocols (interactive and non-interactive) where single-copy encodings are sufficient. Our protocols work for all quantum secret sharing schemes and access structures, and are implementable with current experimental setups. For a single authorised player, our protocols act as quantum authentication protocols.
Damian Markham, Anne Marin
Generalizing Efficient Multiparty Computation
Abstract
We focus on generalizing constructions of Batch Single-Choice Cut-and-Choose Oblivious Transfer and Multi-sender k-out-of-n Oblivious Transfer, which are at the core of efficient secure computation constructions proposed by Lindell et al. and the IPS compiler. Our approach consists of showing that such primitives can be based on a much weaker and simpler primitive called Verifiable Oblivious Transfer (VOT) with low overhead. As an intermediate step we construct Generalized Oblivious Transfer from VOT. Finally, we show that Verifiable Oblivious Transfer can be obtained from a structure-preserving oblivious transfer protocol (SPOT) through an efficient transformation that uses Groth-Sahai proofs and structure-preserving commitments.
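To make the core primitive concrete, here is a minimal, purely illustrative Python sketch of the ideal functionality of k-out-of-n oblivious transfer, i.e. a trusted-dealer view of what the primitive delivers (the receiver learns exactly its k chosen messages, the sender learns nothing about the choices). It is an assumption-level illustration, not a secure protocol and not a construction from the paper.

from typing import List, Sequence

def ot_k_out_of_n(sender_messages: Sequence[bytes], chosen_indices: Sequence[int]) -> List[bytes]:
    """Trusted-dealer view of k-out-of-n OT: the receiver obtains exactly the
    messages at its chosen indices; the sender learns nothing about the choices."""
    n, k = len(sender_messages), len(chosen_indices)
    assert len(set(chosen_indices)) == k <= n, "indices must be distinct and at most n"
    return [sender_messages[i] for i in chosen_indices]

# Example: the receiver obtains m1 and m3 only.
print(ot_k_out_of_n([b"m0", b"m1", b"m2", b"m3"], [1, 3]))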
Bernardo M. David, Ryo Nishimaki, Samuel Ranellucci, Alain Tapp
Round-Optimal Perfectly Secret Message Transmission with Linear Communication Complexity
Abstract
Consider an arbitrary network of n nodes, up to any t of which are eavesdropped on by an adversary. A sender S wishes to send a message m to a receiver R such that the adversary learns nothing about m (unless it eavesdrops on one of {S, R}). We prove a necessary and sufficient condition on the (synchronous) network for the existence of r-round protocols for perfect communication, for any given r > 0. Our results/protocols are easily adapted to asynchronous networks too and are shown to be optimal in asynchronous “rounds”. Further, we show that round-optimality is achieved without trading off communication complexity; specifically, our protocols have an overall message complexity of O(n) elements of a finite field to perfectly transmit one field element. Interestingly, optimality (of protocols) also implies: (a) when the shortest path between S and R has Ω(n) nodes, perfect secrecy is achieved for “free”, because any (insecure routing) protocol would also take O(n) rounds and send O(n) messages (one message along each edge in the shortest path) for transmission; and (b) it is well known that (t + 1) vertex-disjoint paths from S to R are necessary for a protocol to exist; a consequent folklore is that the length of the (t + 1)-th ranked (disjoint shortest) path would dictate the round complexity of protocols; we show that this folklore is false: round-optimal protocols can be substantially faster than that length suggests.
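As a point of reference, the following Python sketch illustrates the classical secrecy argument behind such protocols, under the simplifying assumption of t + 1 internally vertex-disjoint paths: the sender splits a field element into t + 1 additive shares and sends one share along each path, so an adversary eavesdropping on at most t internal nodes misses at least one share and learns nothing about the message. This shows only the basic idea, not the round-optimal, O(n)-communication protocols of the paper.

import secrets

P = 2**61 - 1  # prime modulus; shares live in the field GF(P)

def share(m, t):
    """Split m into t+1 additive shares modulo P (one per vertex-disjoint path)."""
    shares = [secrets.randbelow(P) for _ in range(t)]
    shares.append((m - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

t, m = 3, 123456789
s = share(m, t)                 # send s[i] along the i-th vertex-disjoint path
assert reconstruct(s) == m      # the receiver adds up all t+1 shares
# any t of the t+1 shares are jointly uniform and independent of m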
Ravi Kishore, Ashutosh Kumar, Chiranjeevi Vanarasa, Srinathan Kannan
On Zero-Knowledge with Strict Polynomial-Time Simulation and Extraction from Differing-Input Obfuscation for Circuits
Abstract
This paper investigates the exact round complexity of zero-knowledge arguments of knowledge (ZKAOK) with strict-polynomial-time simulation and extraction. Previously, Barak and Lindell [STOC 02] presented such a ZKAOK in constant rounds. Using the parallel technique of Ostrovsky and Visconti [ECCC 12] for implementing Barak’s zero-knowledge [FOCS 01] in 6 rounds, the Barak-Lindell ZKAOK can, we believe, be implemented in 7 rounds, which is the best exact round complexity for such a ZKAOK from reasonable assumptions.
Recently, Pandey et al. [ePrint 13] proposed a 4-round (concurrent) ZK with strict-polynomial-time simulation based on differing-input obfuscation for machines. Based on their construction, Ding [ISC 14] presented a 4-round ZKAOK with strict-polynomial-time simulation and extraction. However, the known construction of differing-input obfuscation for machines uses knowledge assumptions which are too strong. So an interesting question is whether we can reduce the round complexity of such ZKAOK without using differing-input obfuscation for machines.
In this paper we show that based on differing-input obfuscation for some circuit samplers and other reasonable assumptions, there exists a 6-round ZKAOK for NP with strict-polynomial-time simulation and extraction. Importantly, the assumption of differing-input obfuscation for circuits does not use any knowledge assumption and thus is mild. Moreover, we note that the auxiliary inputs output by the circuit samplers in our construction are public coins and perfectly-hiding commitments, which is quite natural. So this assumption, in our view, could be considered reasonable.
Ning Ding
Unifying Leakage Classes: Simulatable Leakage and Pseudoentropy
Abstract
Leakage-resilient cryptography designs systems to withstand partial adversary knowledge of secret state. Ideally, leakage-resilient systems withstand current and future attacks, restoring confidence in the security of implemented cryptographic systems. Understanding the relations between classes of leakage functions is an important aspect of this goal.
In this work, we consider the memory leakage model, where the leakage class contains functions over the system’s entire secret state. Standard limitations include functions with bounded output length, functions that retain (pseudo) entropy in the secret, and functions that leave the secret computationally unpredictable.
Standaert, Pereira, and Yu (Crypto, 2013) introduced a new class of leakage functions they call simulatable leakage. A leakage function is simulatable if a simulator can produce indistinguishable leakage without access to the true secret state. We extend their notion to general applications and consider two versions (restated schematically after the list below). For weak simulatability, the simulated leakage must be indistinguishable from the true leakage in the presence of public information. For strong simulatability, this requirement must also hold when the distinguisher has access to the true secret state. We show the following:
  • Weakly simulatable functions retain computational unpredictability.
  • Strongly simulatable functions retain pseudoentropy.
  • There are bounded length functions that are not weakly simulatable.
  • There are weakly simulatable functions that remove pseudoentropy.
  • There are leakage functions that retain computational unpredictability but are not weakly simulatable.
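Schematically, using notation introduced here only for illustration (secret state s, public information pub, leakage function L, simulator Sim, computational indistinguishability \(\approx_c\)), the two versions described above can be restated as follows:
\[
\text{weak: } \bigl(\mathit{pub},\, L(s)\bigr) \;\approx_c\; \bigl(\mathit{pub},\, \mathsf{Sim}(\mathit{pub})\bigr)
\qquad
\text{strong: } \bigl(s,\, \mathit{pub},\, L(s)\bigr) \;\approx_c\; \bigl(s,\, \mathit{pub},\, \mathsf{Sim}(\mathit{pub})\bigr).
\]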
Benjamin Fuller, Ariel Hamlin
On the Orthogonal Vector Problem and the Feasibility of Unconditionally Secure Leakage-Resilient Computation
Abstract
We consider unconditionally secure leakage-resilient two-party computation. Security means that the leakage obtained by an adversary can be simulated using a similar amount of leakage from the private inputs or outputs. A related problem is known as circuit compilation, where there is only one device doing a computation on public input and output. Here the goal is to ensure that the adversary learns only the input/output behaviour of the computation, even given leakage from the internal state of the device. We study these problems in an enhanced version of the “only computation leaks” model, where the adversary is additionally allowed a bounded amount of global leakage from the state of the entity under attack. In this model, we show the first unconditionally secure leakage-resilient two-party computation protocol. The protocol assumes access to correlated randomness in the form of a functionality \(f_{\mathrm{Ort}}\) that outputs pairs of orthogonal vectors (u, v) over some finite field, where the adversary can leak independently from u and from v. We also construct a general circuit compiler secure in the same leakage model. Our constructions work even if the adversary is allowed to corrupt a constant fraction of the calls to \(f_{\mathrm{Ort}}\) and decide which vectors should be output. On the negative side, we show that unconditionally secure two-party computation and circuit compilation are in general impossible in the plain version of our model. It follows that even a somewhat unreliable version of \(f_{\mathrm{Ort}}\) cannot be implemented with unconditional security in the plain leakage model using classical communication. However, we show that an implementation using quantum communication does exist. In particular, we propose a simple “prepare-and-measure” type protocol which we show secure using a new result on sampling from a quantum population. Although the protocol may produce a small number of incorrect pairs, this is sufficient for leakage-resilient computation by our other results. To the best of our knowledge, this is the first time a quantum protocol is used constructively for leakage resilience.
Note that the full version of this paper is available at [6].
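To make the correlated-randomness setup concrete, here is a small Python sketch, under illustrative assumptions, of sampling the kind of output \(f_{\mathrm{Ort}}\) provides: a pair of vectors (u, v) over a prime field with inner product zero. It only shows the correlation itself, not the leakage-resilient protocols or the quantum implementation discussed above.

import secrets

P = 101  # small prime field GF(P), chosen only for illustration

def dot(u, v):
    return sum(a * b for a, b in zip(u, v)) % P

def sample_orthogonal_pair(n):
    """Sample u uniformly, then v uniformly subject to <u, v> = 0 (mod P)."""
    u = [secrets.randbelow(P) for _ in range(n)]
    v = [secrets.randbelow(P) for _ in range(n)]
    nonzero = [i for i, ui in enumerate(u) if ui != 0]
    if nonzero:               # fix one coordinate of v so the inner product vanishes
        i = nonzero[0]
        v[i] = 0
        v[i] = (-dot(u, v) * pow(u[i], -1, P)) % P
    return u, v

u, v = sample_orthogonal_pair(8)
assert dot(u, v) == 0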
Ivan Damgård, Frédéric Dupuis, Jesper Buus Nielsen
Metric Pseudoentropy: Characterizations, Transformations and Applications
Abstract
Metric entropy is a computational variant of entropy, often used as a convenient substitute for HILL entropy, which is the standard notion of entropy in many cryptographic applications, like leakage-resilient cryptography, deterministic encryption or memory delegation. In this paper we develop a general method to characterize metric-type computational variants of entropy, in a way depending only on properties of a chosen class of test functions (adversaries). As a consequence, we obtain an elegant geometric interpretation of metric entropy. We apply these characterizations to simplify and modularize proofs of some important results, in particular: (a) the computational dense model theorem (FOCS’08), (b) a variant of the Leftover Hash Lemma with improvements for square-friendly applications (CRYPTO’11) and (c) the equivalence between unpredictability entropy and HILL entropy over small domains (STOC’12). We also give a new tight transformation between HILL and metric pseudoentropy, which implies the dense model theorem with the best possible parameters.
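For orientation, the following display recalls the standard definitions of HILL and metric pseudoentropy as they are commonly stated in the literature (variants differ in the allowed class of distinguishers); the only difference is the order of quantifiers over the distinguisher D and the high-min-entropy variable Y.
\[
\mathbf{H}^{\mathrm{HILL}}_{\varepsilon,s}(X) \ge k \iff \exists\, Y,\ \mathbf{H}_{\infty}(Y) \ge k,\ \forall\, D \text{ of size } s:\ \bigl|\mathbb{E}[D(X)] - \mathbb{E}[D(Y)]\bigr| \le \varepsilon,
\]
\[
\mathbf{H}^{\mathrm{Metric}}_{\varepsilon,s}(X) \ge k \iff \forall\, D \text{ of size } s\ \exists\, Y,\ \mathbf{H}_{\infty}(Y) \ge k:\ \bigl|\mathbb{E}[D(X)] - \mathbb{E}[D(Y)]\bigr| \le \varepsilon.
\]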
Maciej Skorski
Nonuniform Indistinguishability and Unpredictability Hardcore Lemmas: New Proofs and Applications to Pseudoentropy
Abstract
Hardcore lemmas are results in complexity theory which state that average-case hardness must have a very hard “kernel”, that is, a subset of instances on which the given problem is extremely hard. They find important applications in hardness amplification. In this paper we revisit the following two fundamental results:
1. The hardcore lemma for unpredictability, due to Impagliazzo (FOCS ’95): if a Boolean function f is “moderately” hard to predict on average, then there must be a set of noticeable size on which f is “extremely” hard to predict.
2. The hardcore lemma for indistinguishability, proved by Maurer and Tessaro (TCC ’10): for two random variables X and Y which are ε-computationally close, there are events A and B of probability 1 − ε such that the distributions of X|A and Y|B are “computationally” identical.
Using only the standard min-max theorem and some basic facts about convex approximations in \(L_p\) spaces, we provide alternative modular proofs and some generalizations of these results in the nonuniform setting, achieving the best possible bounds for (1) and slightly improving the known bounds for (2). As an interesting application, we show a strengthening of the transformation between the two most popular pseudoentropy variants, HILL and metric entropy, and apply it to show how to extract pseudorandomness from a sequence of metric-entropy sources of poor quality. In this case we significantly improve security parameters compared to the best known techniques.
Maciej Skorski
Gambling, Computational Information and Encryption Security
Abstract
We revisit the question, originally posed by Yao (1982), of whether encryption security may be characterized using computational information. Yao provided an affirmative answer, using a compression-based notion of computational information to give a characterization equivalent to the standard computational notion of semantic security. We give two other equivalent characterizations. The first uses a computational formulation of Kelly’s (1957) model for “gambling with inside information”, leading to an encryption notion which is similar to Yao’s but where encrypted data is used by an adversary to place bets maximizing the rate of growth of total wealth over a sequence of independent, identically distributed events. The difficulty of this gambling task is closely related to Vadhan and Zheng’s (2011) notion of KL-hardness, which in certain cases is equivalent to a conditional form of the pseudoentropy introduced by Håstad et al. (1999). Using techniques introduced to prove this equivalence, we are also able to give a characterization of encryption security in terms of conditional pseudoentropy. Finally, we reconsider the gambling model with respect to “risk-neutral” adversaries in an attempt to understand whether assumptions about the rationality of adversaries may impact the level of security achieved by an encryption scheme.
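To ground the gambling model, here is a small Python computation, a sketch of Kelly’s classical information-theoretic setting rather than the paper’s computational formulation: under fair odds, proportional betting on the conditional distribution raises the optimal doubling rate from zero to exactly the mutual information between the outcome and the side information.

from math import log2

# joint distribution p(x, y): X is a uniform bit, Y equals X except with probability 0.1
p_joint = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}
p_x = {x: sum(p for (xx, _), p in p_joint.items() if xx == x) for x in (0, 1)}
p_y = {y: sum(p for (_, yy), p in p_joint.items() if yy == y) for y in (0, 1)}
odds = {x: 1 / p_x[x] for x in (0, 1)}  # fair odds

# doubling rate of Kelly betting on p(x | y); without side information it is 0
rate_with_side_info = sum(
    p * log2((p / p_y[y]) * odds[x]) for (x, y), p in p_joint.items()
)
mutual_information = sum(
    p * log2(p / (p_x[x] * p_y[y])) for (x, y), p in p_joint.items()
)
print(rate_with_side_info, mutual_information)  # both equal about 0.531 bits per bet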
Mohammad Hajiabadi, Bruce M. Kapron
Query-Complexity Amplification for Random Oracles
Abstract
Increasing the computational complexity of evaluating a hash function, both for the honest users as well as for an adversary, is a useful technique employed for example in password-based cryptographic schemes to impede brute-force attacks, and also in so-called proofs of work (used in protocols like Bitcoin) to show that a certain amount of computation was performed by a legitimate user. A natural approach to adjust the complexity of a hash function is to iterate it c times, for some parameter c, in the hope that any query to the scheme requires c evaluations of the underlying hash function. However, results by Dodis et al. (Crypto 2012) imply that plain iteration falls short of achieving this goal, and designing schemes which provably have such a desirable property remained an open problem.
This paper formalizes explicitly what it means for a given scheme to amplify the query complexity of a hash function. In the random oracle model, the goal of a secure query-complexity amplifier (QCA) scheme is captured as transforming, in the sense of indifferentiability, a random oracle allowing R queries (for the adversary) into one provably allowing only r < R queries. Turned around, this means that making r queries to the scheme requires at least R queries to the actual random oracle. Second, a new scheme, called collision-free iteration, is proposed and proven to achieve c-fold QCA for both the honest parties and the adversary, for any fixed parameter c.
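For concreteness, the following Python snippet shows the naive approach referred to above, plain c-fold iteration of a hash function to make each evaluation roughly c times as expensive. As the abstract explains, plain iteration provably falls short of query-complexity amplification; the paper’s collision-free iteration scheme is different, so this is only an illustration of the baseline.

import hashlib

def iterated_hash(data: bytes, c: int) -> bytes:
    """Apply SHA-256 c times in succession (plain iteration, the naive baseline)."""
    digest = data
    for _ in range(c):
        digest = hashlib.sha256(digest).digest()
    return digest

print(iterated_hash(b"password-candidate", c=10_000).hex())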
Grégory Demay, Peter Gaži, Ueli Maurer, Björn Tackmann
The Chaining Lemma and Its Application
Abstract
We present a new information-theoretic result which we call the Chaining Lemma. It considers a so-called “chain” of random variables, defined by a source distribution \(X^{(0)}\) with high min-entropy and a number (say, t in total) of arbitrary functions \((T_1, \ldots, T_t)\) which are applied in succession to that source to generate the chain \(X^{(0)} \xrightarrow{T_1} X^{(1)} \xrightarrow{T_2} X^{(2)} \cdots \xrightarrow{T_t} X^{(t)}\). Intuitively, the Chaining Lemma guarantees that, if the chain is not too long, then either (i) the entire chain is “highly random”, in that every variable has high min-entropy; or (ii) it is possible to find a point j (1 ≤ j ≤ t) in the chain such that, conditioned on the end of the chain, i.e. \(X^{(j)} \xrightarrow{T_{j+1}} X^{(j+1)} \cdots \xrightarrow{T_t} X^{(t)}\), the preceding part \(X^{(0)} \xrightarrow{T_1} X^{(1)} \cdots \xrightarrow{T_j} X^{(j)}\) remains highly random. We think this is an interesting information-theoretic result which is intuitive but nevertheless requires a rigorous case analysis to prove.
We believe that the above lemma will find applications in cryptography. We give an example of this: we show an application of the lemma to protect essentially any cryptographic scheme against memory-tampering attacks. We allow several tampering requests; the tampering functions can be arbitrary, but they must be chosen from a bounded-size set of functions that is fixed a priori.
Ivan Damgård, Sebastian Faust, Pratyay Mukherjee, Daniele Venturi
Weakening the Isolation Assumption of Tamper-Proof Hardware Tokens
Abstract
Recent results have shown the usefulness of tamper-proof hardware tokens as a setup assumption for building UC-secure two-party computation protocols, thus providing broad security guarantees and allowing the use of such protocols as building blocks in the modular design of complex cryptographic protocols. All these works have in common that they assume the tokens to be completely isolated from their creator, which is a strong assumption. In this work we investigate the feasibility of cryptographic protocols in a setting where the isolation of the hardware token is weakened.
We consider two cases: (1) the token can relay messages to its creator, or (2) the creator can send messages to the token after it is sent to the receiver. We provide a detailed characterization for both settings, presenting both impossibilities and information-theoretically secure solutions.
Rafael Dowsley, Jörn Müller-Quade, Tobias Nilges
Limited View Adversary Codes: Bounds, Constructions and Applications
Abstract
Limited View Adversary Codes (LV codes) provide protection against an adversary who has partial view of the communication channel and can use this view to corrupt the sent codeword by constructing an adversarial error vector that will be added to the codeword. For a codeword of length N, the adversary sees a subset of \(\rho_r N\) of the codeword components and adds an adversarial error vector of weight \(\rho_w N\) to the codeword. A δ-LV code ensures correct recovery of the sent message with probability at least 1 − δ. The motivation for studying these codes is modelling adversarial corruptions in wireless communications as well as networks that are partially controlled by an adversary, with the aim of providing reliable communication.
This paper makes the following contributions. First, we prove an upper bound on the rate of LV codes and extend it to a bound on the rate of a code family. Second, we give an explicit construction of an LV code family that achieves the bound (over large alphabets) for a certain range of \(\rho_r\), and hence obtain the capacity of LV adversarial channels for that range of \(\rho_r\). The construction has efficient encoding and decoding. Finally, we show the relationship between LV codes and a cryptographic primitive known as Reliable Message Transmission (RMT), and use this relation to obtain a new bound on the transmission rate of 1-round δ-RMT protocols and to construct an optimal 1-round RMT protocol family. We discuss our results, relate them to other works and primitives, including list-decodable codes, and suggest directions for future research.
Pengwei Wang, Reihaneh Safavi-Naini
Locally Decodable Codes for Edit Distance
Abstract
Locally decodable codes (LDCs) [1,9] are error-correcting codes that allow decoding (any) individual symbol of the message by reading only a few symbols of the codeword. LDCs, originally considered in the setting of PCPs [1], have found additional applications in the theory of computer science, such as private information retrieval (PIR) in cryptography, generating a lot of fascinating work (see [12] and references within). In one straightforward practical application to storage, such codes provide enormous efficiency gains over standard error-correcting codes (ECCs), which need to read the entire encoded message to learn even a single bit of the message. Typically, LDCs, as well as standard ECCs, are designed to decode the encoded message if up to some bounded fraction of the symbols have been modified. This corresponds to decoding strings of bounded Hamming distance from a valid codeword. A stronger natural metric is the edit distance, measuring the shortest sequence of insertions and deletions (indels) of symbols leading from one word to another. Standard ECCs for edit distance have been previously considered [11]. Furthermore, [11] devised codes with rate and distance (error tolerance) optimal up to constants, with efficient encoding and decoding procedures. However, the combination of these two useful settings, local decodability and robustness against indel errors, has never been considered.
In this work, we study the question of constructing LDCs for edit distance. We demonstrate a strong positive result: LDCs for edit distance can be achieved with parameters similar to those of LDCs for Hamming distance. More precisely, we devise a generic transformation from LDCs for Hamming distance to LDCs for edit distance with related parameters. Besides the theoretical appeal of such a primitive, we put forward a cryptographic application to secure property testing over storage prone to indel errors (such as DNA-based storage).
Rafail Ostrovsky, Anat Paskin-Cherniavsky
The Multivariate Hidden Number Problem
Abstract
This work extends the line of research on the hidden number problem. Motivated by studying bit security in finite fields, we define the multivariate hidden number problem. Here, the secret and the multiplier are vectors, and partial information about their dot product is given. Using tools from discrete Fourier analysis introduced by Akavia, Goldwasser and Safra, we show that if one can find the significant Fourier coefficients of some function, then one can solve the multivariate hidden number problem for that function. This allows us to generalise the work of Akavia on the hidden number problem with (non-adaptive) chosen multipliers to all finite fields.
We give two further applications of our results, both of which generalise previous works to all (finite) extension fields. The first considers the general (random samples) hidden number problem in \(\mathbb{F}_{p^m}\) and assumes advice is given to the algorithm. The second considers a model that allows changing representations, where we show hardness of individual bits for elliptic curve and pairing based functions for elliptic curves over extension fields, as well as hardness of any bit of any component of the Diffie-Hellman secret in \(\mathbb{F}_{p^m}\) (m > 1).
Steven D. Galbraith, Barak Shani
Lattice Point Enumeration on Block Reduced Bases
Abstract
When analyzing lattice-based cryptosystems, we often need to solve the Shortest Vector Problem (SVP) in some lattice associated to the system under scrutiny. The go-to algorithms in practice to solve SVP are enumeration algorithms, which usually consist of a preprocessing step followed by an exhaustive search. Obviously, the two steps offer a trade-off and should be balanced in their running time in order to minimize the overall complexity. In practice, the most common approach to control this trade-off is to use block reduction algorithms during the preprocessing. Despite the popularity of this approach, it lacks any well-founded analysis, and all practical approaches seem to use ad hoc parameters. This weakens our confidence in the cryptanalysis of the systems. In this work, we aim to shed light on at least one side of this trade-off and analyze the effect of block reduction on the exhaustive search. For this, we give asymptotic worst-case bounds and present results from both experiments and simulation that show its average-case behavior in practice.
Michael Walter
Adaptive Key Recovery Attacks on NTRU-Based Somewhat Homomorphic Encryption Schemes
Abstract
In this paper we present adaptive key recovery attacks on NTRU-based somewhat homomorphic encryption schemes. Among such schemes, we study the 2013 proposal by Bos et al. [BLLN13]. Given access to a decryption oracle, the attack allows us to compute the private key for all parameter choices. Such attacks show that one must be very careful about the use of homomorphic encryption in practice. The existence of a key recovery attack means that the scheme is not CCA1-secure. Indeed, almost every somewhat homomorphic construction proposed in the literature to date is vulnerable to an attack of this type. Hence our result adds to a body of literature showing that building CCA1-secure homomorphic schemes is not trivial.
Ricardo Dahab, Steven Galbraith, Eduardo Morais
Backmatter
Metadata
Title
Information Theoretic Security
Edited by
Anja Lehmann
Stefan Wolf
Copyright year
2015
Electronic ISBN
978-3-319-17470-9
Print ISBN
978-3-319-17469-3
DOI
https://doi.org/10.1007/978-3-319-17470-9
