
2010 | Book

Security and Cryptography for Networks

7th International Conference, SCN 2010, Amalfi, Italy, September 13-15, 2010. Proceedings

Editors: Juan A. Garay, Roberto De Prisco

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


Table of Contents

Frontmatter

Encryption I

Time-Specific Encryption
Abstract
This paper introduces and explores the new concept of Time-Specific Encryption (TSE). In (Plain) TSE, a Time Server broadcasts a key at the beginning of each time unit, a Time Instant Key (TIK). The sender of a message can specify any time interval during the encryption process; the receiver can decrypt to recover the message only if it has a TIK that corresponds to a time in that interval. We extend Plain TSE to the public-key and identity-based settings, where receivers are additionally equipped with private keys and either public keys or identities, and where decryption now requires the use of the private key as well as an appropriate TIK. We introduce security models for the plain, public-key and identity-based settings. We also provide constructions for schemes in the different settings, showing how to obtain Plain TSE using identity-based techniques, how to combine Plain TSE with public-key and identity-based encryption schemes, and how to build schemes that are chosen-ciphertext secure from schemes that are chosen-plaintext secure. Finally, we suggest applications for our new primitive, and discuss its relationships with existing primitives, such as Timed-Release Encryption and Broadcast Encryption.
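The core decryption condition of Plain TSE — a Time Instant Key for time t opens a ciphertext for interval [t0, t1] only if t0 ≤ t ≤ t1 — can be sketched as follows. This is a toy illustration of the interface only, with hypothetical names; the paper's actual schemes realize this check cryptographically using identity-based techniques.

```python
# Toy sketch of the Plain TSE decryption condition (illustrative only;
# no real cryptography is performed here).
from typing import Optional, Tuple

def encrypt(message: str, interval: Tuple[int, int]) -> dict:
    """Ciphertext tagged with the sender-chosen decryption interval."""
    t0, t1 = interval
    return {"interval": (t0, t1), "payload": message}

def decrypt(ciphertext: dict, tik_time: int) -> Optional[str]:
    """A TIK for time t recovers the message only if t lies in the interval."""
    t0, t1 = ciphertext["interval"]
    if t0 <= tik_time <= t1:
        return ciphertext["payload"]
    return None

ct = encrypt("meeting at noon", (10, 20))
assert decrypt(ct, 15) == "meeting at noon"  # TIK inside the interval
assert decrypt(ct, 25) is None               # TIK outside: decryption fails
```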
Kenneth G. Paterson, Elizabeth A. Quaglia
Public-Key Encryption with Efficient Amortized Updates
Abstract
Searching and modifying public-key encrypted data has received a lot of attention in recent literature. In this paper we revisit this important topic and achieve improved amortized bounds, including the resolution of a prominent open question posed by Boneh et al. [3].
First, we consider the following, much simpler-to-state problem: A server holds a copy of Alice’s database that has been encrypted under Alice’s public key. Alice would like to allow other users in the system to replace a bit of their choice in the server’s database by communicating directly with the server, despite other users not having Alice’s private key. However, Alice requires that the server should not know which bit was modified. Additionally, she requires that the modification protocol should have “small” communication complexity (sub-linear in the database size). This task is referred to as private database modification, and is a central tool in building a more general protocol for modifying and searching over public-key encrypted data. Boneh et al. [3] first considered the problem and gave a protocol to modify 1 bit of an N-bit database with communication complexity \(\mathcal{O}(\sqrt N)\). Naturally, one can ask if we can improve upon this. Indeed, the recent work of Gentry [9] shows that under lattice assumptions, better asymptotic communication complexity is possible. However, current algebraic techniques based on any singly homomorphic encryption, or on bilinear maps (which includes, for example, all known cryptosystems based on factoring and discrete logs), cannot achieve communication better than \(\mathcal{O}(\sqrt N)\) (see [17]). In this paper we study the problem of improving the communication complexity for modifying L bits of an N-bit database. Our main result is a black-box construction of a private database modification protocol to modify L bits of an N-bit database, using a protocol for modifying 1 bit. Our protocol has communication complexity \(\tilde{\mathcal{O}}(N^\beta L^{(1+\alpha)(1-\beta)})\), where \(0 < \alpha < 1\) can be an arbitrary constant and \(N^\beta\), \(0 < \beta < 1\) (for constant \(\beta\)), is the communication complexity of a protocol for modifying 1 bit of an N-bit database.
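A quick numeric comparison makes the amortized bound concrete: running the 1-bit protocol L times costs L · N^β, while the black-box construction above costs roughly N^β · L^((1+α)(1−β)). The parameter values below are arbitrary choices for illustration, not taken from the paper, and constants and log factors are ignored.

```python
# Illustrative comparison of the communication bounds (constants and
# polylog factors ignored; parameter values are arbitrary examples).
N = 2**20          # database size in bits
L = 2**10          # number of bits to modify
beta = 0.5         # 1-bit protocol costs N**beta, e.g. O(sqrt(N)) as in [3]
alpha = 0.1        # any constant 0 < alpha < 1

naive = L * N**beta                               # run the 1-bit protocol L times
amortized = N**beta * L**((1 + alpha) * (1 - beta))

print(f"naive:     {naive:.0f}")
print(f"amortized: {amortized:.0f}")
assert amortized < naive    # the amortized protocol communicates far less
```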
We stress that our amortized protocol improves the communication complexity in all cases when the single bit modification protocol uses any known cryptosystem based on factoring or discrete logs.
In addition to our general reduction, we show how to realize an implementation of our amortized protocol under the subgroup decision problem [2]. (We remark that in contrast with recent work of Lipmaa [16] on the same topic, our database size does not grow with every update, and stays exactly the same size.)
As sample corollaries to our main result, we obtain the following:
  • First, we apply our private database modification protocol to answer the main open question of [3]. More specifically, we construct a public-key encryption scheme supporting PIR queries that allows every message to have a non-constant number of keywords associated with it, which is secure under the subgroup decision problem.
  • Second, we show that one can apply our techniques to obtain more efficient communication complexity when parties wish to increment or decrement multiple cryptographic counters (formalized by Katz et al. [15]).
We believe that “public-key encrypted” amortized database modification is an important cryptographic primitive in its own right and will be useful in other applications.
Nishanth Chandran, Rafail Ostrovsky, William E. Skeith III
Generic Constructions of Parallel Key-Insulated Encryption
Abstract
Exposure of a secret key is a significant threat in practice. As a notion of security against key exposure, Dodis et al. advocated key-insulated security and proposed concrete key-insulated encryption (KIE) schemes in which secret keys are periodically updated using a physically “insulated” helper key. To significantly reduce the possibility of exposure of the helper key, Hanaoka et al. further proposed the notion of parallel KIE (PKIE), in which multiple helper keys are used in alternate shifts. They also pointed out that, in contrast to the case of standard KIE, PKIE cannot be straightforwardly obtained from identity-based encryption (IBE). In this paper, we clarify that PKIE can be generically constructed from a new primitive which we call one-time forward-secure public-key encryption (OTFS-PKE), and show that it is possible to construct OTFS-PKE from arbitrary IBE or hierarchical IBE (without degenerating into IBE). Using our method, we can obtain various new PKIE schemes with desirable properties. For example, we can construct the first PKIE schemes from lattice or quadratic residuosity problems (without using bilinear maps), and PKIE with short ciphertexts and lower computational cost for both encryption and decryption.
Goichiro Hanaoka, Jian Weng

Invited Talk

Heuristics and Rigor in Lattice-Based Cryptography
(Invited Talk)
Abstract
Cryptographic schemes based on lattices first emerged in the mid-1990s, and have developed rapidly in the past few years. At the outset, works in this area fell into two very distinct types:
  • Heuristic proposals such as NTRU, which lacked any formal security justification but were very practical;
  • Schemes building on Ajtai’s breakthrough work, which were highly impractical but came with provable ‘worst-case’ security guarantees.
More recently, the line between efficiency and rigorous security has been blurred significantly (though not yet obliterated completely).
This talk will survey several examples of early proposals that lacked any rigorous security analysis — and in some cases, turned out to be completely insecure — but which later inspired theoretically sound and efficient solutions. Even better, these solutions have opened the door to unexpected and far more advanced cryptographic applications than were originally envisioned.
Chris Peikert

Cryptanalysis

Differential Fault Analysis of LEX
Abstract
LEX is a stream cipher based on the round transformation of the AES block cipher, and it was selected for the final phase evaluation of the eSTREAM project. LEX is 2.5 times faster than AES both in software and in hardware. In this paper, we present a differential fault attack on LEX. The fault model assumes that the attacker is able to flip a random bit of the internal state of the cipher but cannot control the exact location of the induced fault. Our attack requires 40 faults, and recovers the secret key with \(2^{16}\) operations.
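The stated fault model — a single random bit flip in the cipher's internal state, at a position the attacker cannot choose — can be simulated in a few lines. This is only a simulation of the fault model, not of the attack itself.

```python
# Simulation of the paper's fault model: flip one uniformly random bit of
# a 128-bit AES-like internal state (illustrative; not the attack itself).
import secrets

def inject_random_fault(state: bytes) -> bytes:
    """Return a copy of the state with exactly one random bit flipped."""
    pos = secrets.randbelow(len(state) * 8)   # attacker cannot choose pos
    faulty = bytearray(state)
    faulty[pos // 8] ^= 1 << (pos % 8)
    return bytes(faulty)

state = secrets.token_bytes(16)               # 128-bit internal state
faulty = inject_random_fault(state)
diff = int.from_bytes(state, "big") ^ int.from_bytes(faulty, "big")
assert bin(diff).count("1") == 1              # exactly one bit differs
```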
Jianyong Huang, Willy Susilo, Jennifer Seberry
Generalized RC4 Key Collisions and Hash Collisions
Abstract
In this paper, we discover that RC4 can generate colliding key pairs with various Hamming distances, other than those found by Matsui (with Hamming distance one) and by Chen and Miyaji (with Hamming distance three). We formalize RC4 colliding key pairs into two large patterns, namely the Transitional pattern and the Self-Absorbing pattern, according to their behavior during the KSA. The colliding key pairs found in previous research can be seen as subsets of either the Transitional pattern or the Self-Absorbing pattern. We analyze both patterns and clarify the relations among the probability of key collision, the key length, and the Hamming distances which yield colliding key pairs. We also show how to make use of the RC4 key collision patterns to find collisions of the RC4-Hash function proposed at INDOCRYPT 2006. Some concrete experimental results (an RC4-Hash collision and RC4 colliding key pairs) are also given in this paper.
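For context, a "colliding key pair" is two distinct keys for which the RC4 key-scheduling algorithm (KSA) produces identical internal states, and hence identical keystreams. The KSA itself is standard and is sketched below; the example keys shown are not a colliding pair (finding such pairs is exactly the subject of the paper).

```python
# The standard RC4 key-scheduling algorithm (KSA). Two keys "collide"
# when they yield the same state array S, and hence the same keystream.
def rc4_ksa(key: bytes) -> list:
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    return S

def keys_collide(k1: bytes, k2: bytes) -> bool:
    """True iff two distinct keys produce the same RC4 internal state."""
    return k1 != k2 and rc4_ksa(k1) == rc4_ksa(k2)

# Generic key pairs essentially never collide; the paper's patterns
# characterize the special structures where they do.
assert not keys_collide(b"secretkey0", b"secretkey1")
assert sorted(rc4_ksa(b"secretkey0")) == list(range(256))  # S is a permutation
```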
Jiageng Chen, Atsuko Miyaji

Hash Functions

On the Indifferentiability of the Grøstl Hash Function
Abstract
The notion of indifferentiability, introduced by Maurer et al., is an important criterion for the security of hash functions. Concretely, it ensures that a hash function has no structural design flaws and thus guarantees security against generic attacks up to the proven bounds. In this work we prove the indifferentiability of Grøstl, a second round SHA-3 hash function candidate. Grøstl combines characteristics of the wide-pipe and chop-Merkle-Damgård iterations and uses two distinct permutations P and Q internally. Under the assumption that P and Q are random \(l\)-bit permutations, where \(l\) is the iterated state size of Grøstl, we prove that the advantage of a distinguisher to differentiate Grøstl from a random oracle is upper bounded by \(O((Kq)^4/2^l)\), where the distinguisher makes at most q queries of length at most K blocks. This result implies that Grøstl behaves like a random oracle up to \(q = O(2^{n/2})\) queries, where n is the output size. Furthermore, we show that the output transformation of Grøstl, as well as ‘Grøstail’ (the composition of the final compression function and the output transformation), are clearly differentiable from a random oracle. This rules out indifferentiability proofs which rely on the idealness of the final state transformation.
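The claimed \(q = O(2^{n/2})\) security level follows from the stated advantage bound by solving for the query count at which the advantage becomes constant; a sketch, assuming the wide-pipe relation \(l = 2n\) and treating the block length K as a constant:

```latex
O\!\left(\frac{(Kq)^4}{2^{l}}\right) = \Theta(1)
\;\Longrightarrow\;
q = O\!\left(\frac{2^{l/4}}{K}\right)
\;\stackrel{l = 2n}{=}\; O\!\left(2^{n/2}\right).
```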
Elena Andreeva, Bart Mennink, Bart Preneel

Side Channel Attacks and Leakage Resilience

Algorithmic Tamper-Proof Security under Probing Attacks
Abstract
Gennaro et al. initiated the study of algorithmic tamper proof (ATP) cryptography: cryptographic hardware that remains secure even in the presence of an adversary who can tamper with the memory content of a hardware device. In this paper, we solve an open problem stated in their paper, and also consider whether a device can be secured against an adversary who can both tamper with its memory and probe a few memory locations or wires at a time. Our results are as follows:
  • It is impossible to realize a secure cryptographic functionality with a personal identification number (PIN) where a user is allowed to make up to ℓ incorrect consecutive attempts to enter her PIN, with no total limit on incorrect PIN attempts. (This was left as an open problem by Gennaro et al.)
  • It is impossible to secure a deterministic cryptographic device against an adversary who is allowed to both tamper with the memory of the device and probe a memory location; it is also essentially infeasible to secure it if the adversary’s probing power is restricted to internal wires; it is impossible to secure it against an adversary whose probing power is restricted to internal wires, but who is also allowed to tamper with a few internal wires.
  • By extending the results of Ishai et al., we show that a cryptographic device with a true source of randomness can withstand tampering and limited probing attacks at the same time.
Feng-Hao Liu, Anna Lysyanskaya
Leakage-Resilient Storage
Abstract
We study a problem of secure data storage on hardware that may leak information. We introduce a new primitive, which we call leakage-resilient storage (LRS): an (unkeyed) scheme for encoding messages that can be viewed as a generalization of the All-Or-Nothing Transform (AONT, Rivest 1997). The standard definition of AONT requires that it should be hard to reconstruct a message m if not all the bits of its encoding Encode(m) are known. LRS is defined more generally, with respect to a class \(\Gamma\) of functions. The security definition of LRS requires that it should be hard to reconstruct m even if some values \(g_1(\mathrm{Encode}(m)), \ldots, g_t(\mathrm{Encode}(m))\) are known (where \(g_1, \ldots, g_t \in \Gamma\)), as long as the total length of \(g_1(\mathrm{Encode}(m)), \ldots, g_t(\mathrm{Encode}(m))\) is smaller than some parameter c.
We construct an LRS scheme that is secure with respect to \(\Gamma\) being a set of functions that can depend only on some restricted part of the memory. More precisely, we assume that the memory is divided into two parts, and the functions in \(\Gamma\) can be applied to only one of these parts. We also construct a scheme that is secure if the cardinality of \(\Gamma\) is restricted (though it may still be exponential in the length of the encoding). This construction implies security in the case when the set \(\Gamma\) consists of functions that are computable by Boolean circuits of small size.
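A toy instance of the two-part memory model can be given with a one-time-pad split: store a random mask in one memory part and the masked message in the other, so that any leakage function applied to a single part sees only a uniformly random string. This is purely an illustration of the model; the paper's actual construction is different.

```python
# Toy encoding for the two-part memory model: Encode(m) = (r, m XOR r),
# with the two components stored in the two memory parts. A leakage
# function reading only one part sees a string independent of m.
# (Illustration of the model only; not the paper's construction.)
import secrets

def encode(m: bytes) -> tuple:
    r = secrets.token_bytes(len(m))              # memory part 1: random mask
    masked = bytes(a ^ b for a, b in zip(m, r))  # memory part 2: masked message
    return r, masked

def decode(part1: bytes, part2: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(part1, part2))

m = b"secret message"
p1, p2 = encode(m)
assert decode(p1, p2) == m      # both parts together recover m
assert len(p1) == len(p2) == len(m)
```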
We also discuss the connection between the problem of constructing leakage-resilient storage and a theory of the compressibility of NP-instances.
Francesco Davì, Stefan Dziembowski, Daniele Venturi

Encryption II

Searching Keywords with Wildcards on Encrypted Data
Abstract
A hidden vector encryption scheme (HVE) is a derivation of identity-based encryption, where the public key is actually a vector over a certain alphabet. The decryption key is also derived from such a vector, but this one is also allowed to have “⋆” (or wildcard) entries. Decryption is possible as long as these tuples agree on every position except where a “⋆” occurs.
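The matching rule above reduces to a simple predicate over the two vectors. The sketch below shows only that plaintext predicate; the actual HVE scheme evaluates it cryptographically, without revealing the vectors.

```python
# The HVE matching predicate: the key vector may contain wildcards ("*");
# decryption succeeds iff it agrees with the ciphertext vector on every
# non-wildcard position.
WILDCARD = "*"

def hve_match(key_vec, ct_vec) -> bool:
    return len(key_vec) == len(ct_vec) and all(
        k == WILDCARD or k == c for k, c in zip(key_vec, ct_vec)
    )

# Searching for keywords with wildcards, as in the paper's motivation:
assert hve_match(list("s*cure"), list("secure"))
assert not hve_match(list("s*cure"), list("source"))
```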
These schemes are useful for a variety of applications: they can be used as a building block to construct attribute-based encryption schemes and sophisticated predicate encryption schemes (e.g., for range or subset queries). Another interesting application – and our main motivation – is to create searchable encryption schemes that support queries for keywords containing wildcards.
Here we construct a new HVE scheme, based on bilinear groups of prime order, which supports vectors over any alphabet. Depending on a trade-off, the resulting ciphertexts are shorter than those of existing schemes. The length of the decryption key and the computational complexity of decryption are both constant, unlike existing schemes where both depend on the number of non-wildcard symbols associated with the decryption key.
Our construction hides both the plaintext and public key used for encryption. We prove security in a selective model, under the decision linear assumption.
Saeed Sedghi, Peter van Liesdonk, Svetla Nikova, Pieter Hartel, Willem Jonker
Threshold Attribute-Based Signcryption
Abstract
In this paper, we propose a new threshold attribute-based signcryption scheme secure in the standard model. The scheme provides message confidentiality, and authenticity of a message in addition to attesting the attributes of the sender. Such a property is useful in applications such as electronic card, digital prescription carrier devices, secure and authentic email service, etc. Our scheme relies on the intractability of the hashed modified decisional Diffie-Hellman and modified computational Diffie-Hellman assumptions, and is proven secure under adaptive chosen ciphertext attack and chosen message attack security notions of signcryption. Further, we achieve a tight reduction for both the security notions in the standard model.
Martin Gagné, Shivaramakrishnan Narayan, Reihaneh Safavi-Naini

Cryptographic Protocols I

Efficiency-Improved Fully Simulatable Adaptive OT under the DDH Assumption
Abstract
At Asiacrypt 2009, Kurosawa and Nojima showed a fully simulatable adaptive oblivious transfer (OT) protocol under the DDH assumption in the standard model. However, Green and Hohenberger pointed out that the communication cost of each transfer phase is O(n), where n is the number of the sender’s messages. In this paper, we show that the cost can be reduced to O(1) by utilizing a verifiable shuffle protocol.
Kaoru Kurosawa, Ryo Nojima, Le Trieu Phong
Improved Primitives for Secure Multiparty Integer Computation
Abstract
We consider a collection of related multiparty computation protocols that provide core operations for secure integer and fixed-point computation. The higher-level protocols offer integer truncation and comparison, which are typically the main performance bottlenecks in complex applications. We present techniques and building blocks that improve the efficiency of these protocols, in order to meet the performance requirements of a broader range of applications. The protocols can be constructed using different secure computation methods. We focus on solutions for multiparty computation using secret sharing.
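The secret-sharing substrate such protocols build on can be illustrated with additive sharing modulo a prime: each party holds one share, linear operations are done locally on shares, and only the sum of all shares reveals a value. This is a toy illustration of the general method, not the paper's protocols; the modulus below is an arbitrary choice.

```python
# Additive secret sharing mod a prime: the linear substrate on which
# secure integer computation is typically built (toy illustration).
import secrets

P = 2**61 - 1  # share modulus (an arbitrary Mersenne prime for this sketch)

def share(x: int, n: int = 3) -> list:
    """Split x into n shares that sum to x mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares: list) -> int:
    return sum(shares) % P

a, b = 1234, 5678
sa, sb = share(a), share(b)
# Each party adds its two shares locally; no single share reveals a or b.
sc = [(x + y) % P for x, y in zip(sa, sb)]
assert reconstruct(sc) == a + b
```

Nonlinear operations such as the truncation and comparison discussed above are exactly what cannot be done share-locally, which is why they dominate the cost of such protocols.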
Octavian Catrina, Sebastiaan de Hoogh
How to Pair with a Human
Abstract
We introduce a protocol, which we call Human Key Agreement, that allows pairs of humans to establish a key in a (seemingly hopeless) setting where no public-key infrastructure is available, the users do not share any common secret, and they have never been connected by any physically secure channel. Our key agreement scheme, while vulnerable to human-in-the-middle attacks, is secure against any malicious machine-in-the-middle. The only assumption we make is that the attacker is a machine that is not able to break Captcha puzzles (introduced by von Ahn et al., EUROCRYPT 2003).
Our main tool is a primitive that we call a Simultaneous Turing Test, which is a protocol that allows two users to verify if they are both human, in such a way that if one of them is not a human, then he does not learn whether the other one is human, or not.
To construct this tool we use a Universally-Composable Password Authenticated Key Agreement of Canetti et al. (EUROCRYPT 2005).
Stefan Dziembowski

Authentication and Key Agreement

A New Security Model for Authenticated Key Agreement
Abstract
The Canetti–Krawczyk (CK) and extended Canetti–Krawczyk (eCK) security models are widely used to provide security arguments for key agreement protocols. We discuss security shades in the (e)CK models, and some practical attacks not considered in (e)CK security arguments. We propose a strong security model which encompasses the eCK one. We also propose a new protocol, called Strengthened MQV (SMQV), which, in addition to providing the same efficiency as the (H)MQV protocols, is particularly suited for distributed implementations wherein a tamper-proof device is used to store long-lived keys while session keys are used on an untrusted host machine. The SMQV protocol meets our security definition under the Gap Diffie–Hellman assumption in the Random Oracle model.
Augustin P. Sarr, Philippe Elbaz-Vincent, Jean-Claude Bajard
A Security Enhancement and Proof for Authentication and Key Agreement (AKA)
Abstract
In this work, we consider Authentication and Key Agreement (AKA), a popular client-server Key Exchange (KE) protocol, commonly used in wireless standards (e.g., UMTS), and widely considered for new applications. We discuss natural potential usage scenarios for AKA, attract attention to subtle vulnerabilities, propose a simple and efficient AKA enhancement, and provide its formal proof of security.
The vulnerabilities arise from the fact that AKA is not a secure KE in the standard cryptographic sense, since Client \(\mathcal{C}\) does not contribute randomness to the session key. We argue that AKA remains secure in current deployments where \(\mathcal{C}\) is an entity controlled by a single tamper-resistant User Identity Module (UIM). However, we also show that AKA is insecure if several of the Client’s devices/UIMs share its identity and key.
We show practical applicability and efficiency benefits of such multi-UIM scenarios. As our main contribution, we adapt AKA for this setting, with only the minimal changes, while adhering to AKA design goals, and preserving its advantages and features. Our protocol involves one extra PRFG evaluation and no extra messages. We formally prove security of the resulting protocol. We discuss how our security improvement allows simplification of some of AKA security heuristics, which may make our protocol more efficient and robust than AKA even for the current deployment scenarios.
Vladimir Kolesnikov
Authenticated Key Agreement with Key Re-use in the Short Authenticated Strings Model
Abstract
Serge Vaudenay [20] introduced a notion of Message Authentication (MA) protocols in the Short Authenticated String (SAS) model. A SAS-MA protocol authenticates arbitrarily long messages sent over insecure channels as long as the sender and the receiver can additionally send a very short, e.g. 20 bit, authenticated message to each other. The main practical application of a SAS-MA protocol is Authenticated Key Agreement (AKA) in this communication model, i.e. SAS-AKA, which can be used for so-called “pairing” of wireless devices. Subsequent work [9,12,10] showed three-round SAS-AKA protocols. However, the Diffie-Hellman (DH) based SAS-AKA protocol of [10] requires choosing fresh DH exponents in each protocol instance, while the generic SAS-AKA construction given by [12] applies only to AKA protocols which have no shared state between protocol sessions. Therefore, both prior works exclude the most efficient, although not perfect-forward-secret, AKA protocols that re-use private keys (for encryption-based AKAs) or DH exponents (for DH-based AKAs) across multiple protocol sessions.
In this paper, we propose a novel three-round encryption-based SAS-AKA protocol, using non-malleable commitments and CCA-secure encryption as tools, which we show secure (but without perfect-forward secrecy) if each player re-uses its private/public key across protocol sessions. The cost of this protocol is dominated by a single public key encryption for one party and a decryption for the other, assuming the Random Oracle Model (ROM). When implemented with RSA encryption the new SAS-AKA protocol is especially attractive if the two devices being paired have asymmetric computational power (e.g., a desktop and a keyboard).
Stanisław Jarecki, Nitesh Saxena

Cryptographic Primitives and Schemes

Kleptography from Standard Assumptions and Applications
Abstract
Kleptography deals with employing and generating cryptographically secure covert channels as threats to unscrutinized (e.g., tamper-proof) cryptosystems and their applications. A prototypical example is a cryptosystem (or a protocol message employing a cryptosystem) where a cryptogram field (e.g., a public key, an encrypted message, a signature value) hosts an “inner cryptographic field” that is invisible (in the sense of indistinguishability) to all but the attacker, yet is a meaningful ciphertext to the attacker (who is the designer/producer of the cryptosystem). The technical goal of Kleptography has been to identify such “inner fields” as a way to embed cryptographic values in a small-bandwidth channel/sub-cryptogram inside a hosting system (RSA, DH-based systems, etc.).
All asymmetric backdoors to date that seamlessly embed an inner subliminal crypto field inside a hosting cryptographic value needed random oracle assumptions. These were used to make the inner value look “almost uniformly random” as part of its hosting random field. It was open whether the need for a random oracle is inherent, or, positively put: is there an algebraic cryptographic ciphertext that is embeddable inside another algebraic cryptographic field “as is”? In this work we achieve this goal for small-bandwidth fields. To this end we present a new information-hiding primitive that we call a “covert key exchange” that permits provably secure covert communication. Our results surpass previous work since: (1) the bandwidth that the subliminal channel needs is extremely small (the bit length of a single compressed elliptic-curve point), (2) the error probability of the exchange is negligible, and (3) our results are in the standard model. We use this protocol to implement the first kleptographic (i.e., asymmetric) backdoor in the standard model in RSA key generation and point at other applications. Key properties of the covert key exchange are that (1) both Alice’s message to Bob and their shared secret appear to all efficient algorithms as uniformly random strings from \(\{0,1\}^{k+1}\) and \(\{0,1\}^M\), respectively (this is needed for the embedding), and (2) the fastest adversaries of the exchange run in time exponential in k, based on current knowledge (they have to solve DL over e-curves). We achieve this in the standard model based on the ECDDH assumption over a twisted pair of e-curves.
Adam Young, Moti Yung
Provably Secure Convertible Undeniable Signatures with Unambiguity
Abstract
This paper presents efficient and provably secure convertible undeniable signature schemes (with both selective conversion and all conversion) in the standard model and the discrete logarithm setting. They further satisfy unambiguity, which is traditionally required of anonymous signatures. Briefly, unambiguity means that it is hard to generate a (message, signature) pair which is valid for two different public keys. In other words, our schemes can be viewed as anonymous signature schemes as well as convertible undeniable signature schemes. Among other applications, we show that such schemes are very suitable for anonymous auctions.
Le Trieu Phong, Kaoru Kurosawa, Wakaha Ogata
History-Free Aggregate Message Authentication Codes
Abstract
Aggregate message authentication codes, as introduced by Katz and Lindell (CT-RSA 2008), combine several MACs into a single value, which has roughly the same size as an ordinary MAC. These schemes reduce the communication overhead significantly and are therefore a promising approach to achieve authenticated communication in mobile ad-hoc networks, where communication is prohibitively expensive. Here we revisit the unforgeability notion for aggregate MACs and show that the definition does not prevent “mix-and-match” attacks, in which the adversary turns several aggregates into a “fresh” combination, i.e., into a valid aggregate on a sequence of messages which the attacker has not requested before. In particular, we show concrete attacks on the previous scheme.
To capture the broader class of combination attacks, we provide a stronger security notion of aggregation unforgeability. While we can provide stateful transformations lifting (non-ordered) schemes to meet our stronger security notion, for the state-free case we switch to the new notion of history-free sequential aggregation. This notion sits between non-ordered and sequential schemes and basically says that the aggregation algorithm is carried out in a sequential order, but must not depend on the preceding messages in the sequence, only on the shorter input aggregate and the local message. We finally show that we can build an aggregation-unforgeable, history-free sequential MAC scheme based on general assumptions.
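To my understanding, aggregation in the Katz–Lindell construction is simply the XOR of the individual MAC tags; a minimal sketch of that idea, with HMAC-SHA256 standing in for the underlying MAC, is below. Note that this sketch also illustrates why "mix-and-match" recombination is a concern: XOR aggregates of overlapping message sets can be recombined by an adversary.

```python
# Sketch of XOR-aggregation of MAC tags in the style of Katz-Lindell
# (CT-RSA 2008); HMAC-SHA256 stands in for the underlying MAC.
import hmac, hashlib
from functools import reduce

def mac(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def aggregate(tags) -> bytes:
    """Combine tags into one value of ordinary-MAC size by XOR."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), tags)

def verify(agg: bytes, key_msg_pairs) -> bool:
    expected = aggregate([mac(k, m) for k, m in key_msg_pairs])
    return hmac.compare_digest(agg, expected)

pairs = [(b"key-1", b"reading A"), (b"key-2", b"reading B")]
agg = aggregate([mac(k, m) for k, m in pairs])
assert verify(agg, pairs)
assert not verify(agg, [(b"key-1", b"reading A"), (b"key-2", b"forged")])
```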
Oliver Eikemeier, Marc Fischlin, Jens-Fabian Götzmann, Anja Lehmann, Dominique Schröder, Peter Schröder, Daniel Wagner

Lattice-Based Cryptography

Recursive Lattice Reduction
Abstract
Lattice reduction is known to be a very powerful tool in modern cryptanalysis. Many lattice reduction algorithms with various time complexities (from quadratic to subexponential) have been proposed in the literature. These algorithms can be used to find a short vector of a lattice. Over time, shorter vectors have been found by incorporating these methods. In this paper, we take a different approach: we present a methodology that can be applied to any lattice reduction algorithm and enables us to find a shorter vector (i.e., a smaller solution) in less computation time. Instead of applying a lattice reduction algorithm to a complete lattice, we work on a sublattice of smaller dimension, chosen as a function of the lattice reduction algorithm being used. This way, the lattice reduction algorithm is fully utilized and hence produces a better solution. Furthermore, as the dimension of the lattice becomes smaller, the time complexity improves. Our methodology thus provides a new direction for building lattices that are resistant to lattice reduction attacks. Moreover, based on this methodology, we also propose a recursive method that produces an optimal approach for lattice reduction with optimal computation time, regardless of the lattice reduction algorithm used. We evaluate our technique by applying it to the lattice challenge, producing the shortest vector known so far. Our results outperform the previously known results and hence set the record for the lattice challenge problem.
Thomas Plantard, Willy Susilo
Adaptively Secure Identity-Based Identification from Lattices without Random Oracles
Abstract
We propose a concurrently secure, identity-based identification scheme from lattices. It offers adaptive-identity security in the standard model, quasi-optimal online performance, optimal leakage resilience, and its security is based on mild worst-case assumptions in ideal lattices. Our scheme uses an ideal-lattice interpretation of the Bonsai tree concept (EUROCRYPT 2010), which we call convoluted Bonsai trees. It allows us to build an identity-based identification scheme in a new “static identity” model that is weaker than the standard “adaptive identity” model. We show that both models are equivalent under the existence of Chameleon hash functions.
Markus Rückert

Groups Signatures and Authentication

The Fiat–Shamir Transform for Group and Ring Signature Schemes
Abstract
The Fiat-Shamir (FS) transform is a popular tool to produce particularly efficient digital signature schemes out of identification protocols. It is known that the resulting signature scheme is secure (in the random oracle model) if and only if the identification protocol is secure against passive impersonators. A similar result holds for constructing ID-based signature schemes out of ID-based identification protocols.
The transformation had also been applied to identification protocols with additional privacy properties. So, via the FS transform, ad-hoc group identification schemes yield ring signatures and identity escrow schemes yield group signature schemes. Unfortunately, results akin to those above are not known to hold for these latter settings and the security of the resulting schemes needs to be proved from scratch, or worse, it is often simply assumed.
In this paper we provide the missing foundations for the use of the FS transform in these more complex settings. We start by defining a formal security model for identity escrow schemes (a concept proposed earlier but never rigorously formalized). Our main result consists of necessary and sufficient conditions for an identity escrow scheme to yield (via the FS transform) a secure group signature scheme. In addition, using the similarity between group and ring signature schemes, we give analogous results for the latter primitive.
Ming Feng Lee, Nigel P. Smart, Bogdan Warinschi
Get Shorty via Group Signatures without Encryption
Abstract
Group signatures allow group members to anonymously sign messages in the name of a group such that only a dedicated opening authority can reveal the exact signer behind a signature. In many of the target applications, for example in sensor networks or in vehicular communication networks, bandwidth and computation time are scarce resources and many existing constructions simply cannot be used. Moreover, some of the most efficient schemes only guarantee anonymity as long as no signatures are opened, rendering the opening functionality virtually useless.
In this paper, we propose a group signature scheme with the shortest known signature size and favorably comparable computation time, whilst still offering a strong and practically relevant security level that guarantees secure opening of signatures, protection against a cheating authority, and support for dynamic groups. Our construction departs from the popular sign-and-encrypt-and-prove paradigm, which we identify as one source of inefficiency. In particular, our proposal does not use standard encryption and relies on re-randomizable signature schemes that hide the signed message so as to preserve the anonymity of signers.
Security is proved in the random oracle model assuming the XDDH, LRSW and SDLP assumptions and the security of an underlying digital signature scheme. Finally, we demonstrate how our scheme yields a group signature scheme with verifier-local revocation.
Patrik Bichsel, Jan Camenisch, Gregory Neven, Nigel P. Smart, Bogdan Warinschi
Group Message Authentication
Abstract
Group signatures are a powerful primitive with many practical applications, allowing a group of parties to share a signature functionality while protecting the anonymity of the signer. However, despite intensive research in recent years, there is still no fully satisfactory implementation of group signatures in the plain model. The schemes proposed so far are either too inefficient to be used in practice, or their security is based on rather strong, non-standard assumptions.
We observe that for some applications the full power of group signatures is not necessary. For example, a group signature can be verified by any third party, while in many applications such universal verifiability is not needed or even not desired. Motivated by this observation, we propose a notion of group message authentication, which can be viewed as a relaxation of group signatures. Group message authentication enjoys the group-oriented features of group signatures, while dropping some of the features that are not needed in many real-life scenarios. An example application of group message authentication is an implementation of an anonymous credit card.
We present a generic implementation of group message authentication, and also propose an efficient concrete implementation based on standard assumptions, namely strong RSA and DDH.
Bartosz Przydatek, Douglas Wikström

Cryptographic Protocols II

Fast Secure Computation of Set Intersection
Abstract
A secure set intersection protocol between sender S and receiver R on respective inputs X and Y s.t. |X|, |Y| ≤ n allows R to learn X ∩ Y while S learns nothing about R’s inputs. In other words, it is a secure computation of the set-intersection functionality on sets of size at most n. A variant we call adaptive set intersection implements an interactive version of this functionality, which, on sender S’s input X, allows the receiver R to adaptively make up to n queries \(y_i\) and learn whether or not \(y_i \in X\).
We show that a simple protocol using |X| + 4|Y| modular exponentiations and one round of interaction is a secure computation of the adaptive set intersection functionality against malicious adversaries in the Random Oracle Model (ROM) under a One-More Gap Diffie-Hellman (OMGDH) assumption, i.e. assuming the One-More Diffie-Hellman problem is hard even when the DDH problem is easy. Even though the protocol has only a single round, the corresponding ideal functionality is adaptive because the receiver’s queries are efficiently extractable only eventually, rather than during protocol execution. However, under the OMGDH assumption in the ROM, the set of queries any efficient receiver can make is committed at the time of protocol execution, and hence no efficient adversary can benefit from the adaptive feature of this functionality.
Finally, we show that this protocol easily extends to Set Intersection with Data Transfer, which is equivalent to the “Keyword Search” problem, where sender S associates each item \(x_i\) in X with a data entry \(d_i\), and R learns all \((x_i, d_i)\) pairs such that \(x_i \in Y\).
Stanisław Jarecki, Xiaomin Liu
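The blind-Diffie-Hellman idea underlying this style of adaptive set intersection can be sketched as follows: the sender holds an OPRF-like key k and publishes a fingerprint H(x)^k of each of its items; the receiver learns H(y)^k for a query y by sending a blinded value and unblinding the reply, so the sender never sees y. The Python sketch below is a hedged simplification under assumed toy parameters (a runtime-generated 64-bit safe-prime group, squaring as hash-to-subgroup), not the paper's exact protocol, which additionally achieves malicious security under OMGDH in the ROM.

```python
import hashlib
import secrets

def is_prime(n):
    """Deterministic Miller-Rabin for n < 3.3 * 10**24."""
    if n < 2:
        return False
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % a == 0:
            return n == a
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def gen_safe_prime(bits=64):
    """Find p = 2q + 1 with p, q prime; all work happens in the order-q subgroup."""
    while True:
        q = secrets.randbits(bits) | (1 << (bits - 1)) | 1
        if is_prime(q) and is_prime(2 * q + 1):
            return 2 * q + 1, q

def hash_to_group(item, p):
    """Hash an item into the order-q subgroup (squaring lands in the QRs)."""
    h = int.from_bytes(hashlib.sha256(item).digest(), "big") % p
    return pow(h, 2, p)

p, q = gen_safe_prime()
k = secrets.randbelow(q - 1) + 1           # sender's secret key

# Sender publishes a fingerprint H(x)^k of every item in its set X
X = [b"apple", b"pear", b"plum"]
fingerprints = {pow(hash_to_group(x, p), k, p) for x in X}

def query(y):
    """One adaptive receiver query: blind, have the sender exponentiate, unblind."""
    r = secrets.randbelow(q - 1) + 1
    a = pow(hash_to_group(y, p), r, p)     # receiver -> sender: H(y)^r
    b = pow(a, k, p)                       # sender -> receiver: H(y)^(r*k)
    fy = pow(b, pow(r, -1, q), p)          # receiver unblinds to H(y)^k
    return fy in fingerprints
```

Since group elements have order q, unblinding computes H(y)^(r·k·r⁻¹ mod q) = H(y)^k, and the receiver can test membership against the published fingerprints one query at a time, matching the adaptive flavor of the functionality.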
Distributed Private-Key Generators for Identity-Based Cryptography
Abstract
An identity-based encryption (IBE) scheme can greatly reduce the complexity of sending encrypted messages. However, an IBE scheme necessarily requires a private-key generator (PKG), which can create private keys for clients, and so can passively eavesdrop on all encrypted communications. Although a distributed PKG has been suggested as a way to mitigate this key escrow problem for Boneh and Franklin’s IBE scheme, the security of this distributed protocol has not been proven. Further, a distributed PKG has not been considered for any other IBE scheme.
In this paper, we design distributed PKG setup and private-key extraction protocols for three important IBE schemes, namely Boneh and Franklin’s BF-IBE, Sakai and Kasahara’s SK-IBE, and Boneh and Boyen’s \(\mbox{BB}_1\)-IBE. We give special attention to the applicability of our protocols to all possible types of bilinear pairings and prove their IND-ID-CCA security in the random oracle model against a Byzantine adversary. Finally, we also perform a comparative analysis of these protocols and present recommendations for their use.
Aniket Kate, Ian Goldberg

Anonymity

Solving Revocation with Efficient Update of Anonymous Credentials
Abstract
Anonymous credential systems promise efficient, ubiquitous access to digital services while preserving user privacy. However, their adoption is impaired by the lack of efficient revocation techniques. Traditional credential revocation measures based on certificate revocation lists or online certification authorities do not provide privacy and cannot be used in privacy-sensitive contexts. Existing revocation techniques specifically geared towards anonymous credential systems are more involved – for the credential issuer, users, as well as credential consumers – as users have to prove that their credential is still valid, e.g., not included in a revocation list.
We introduce a novel, non-interactive technique to update issuer-controlled attributes of anonymous credentials. Revocation is implemented by encoding the validity time of a credential into one of these attributes. With the proposed protocol, credential issuers can periodically update valid credentials off-line and publish a small per-credential update value on a public bulletin board. Users can later download their values and re-validate their credentials to prove possession of a valid credential for the current time period. Our solution outperforms all prior solutions for credential revocation in terms of communication and computational costs for the users and credential consumers, and the issuer’s effort is comparable to the best prior proposals.
Jan Camenisch, Markulf Kohlweiss, Claudio Soriente
Backmatter
Metadata
Title
Security and Cryptography for Networks
Editors
Juan A. Garay
Roberto De Prisco
Copyright Year
2010
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-15317-4
Print ISBN
978-3-642-15316-7
DOI
https://doi.org/10.1007/978-3-642-15317-4
