
2014 | Book

Cryptology and Network Security

13th International Conference, CANS 2014, Heraklion, Crete, Greece, October 22-24, 2014. Proceedings

Edited by: Dimitris Gritzalis, Aggelos Kiayias, Ioannis Askoxylakis

Publisher: Springer International Publishing

Book Series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 13th International Conference on Cryptology and Network Security, CANS 2014, held in Heraklion, Crete, Greece, in October 2014. The 25 revised full papers presented together with the abstracts of 3 invited talks were carefully reviewed and selected from 86 submissions. The papers cover topics of interest such as encryption, cryptanalysis, malware analysis, and privacy and identification systems, as well as various types of network protocol design and analysis work.

Table of Contents

Frontmatter
Bootstrappable Identity-Based Fully Homomorphic Encryption
Abstract
It has been an open problem for a number of years to construct an identity-based fully homomorphic encryption (IBFHE) scheme (first mentioned by Naccache at CHES/CRYPTO 2010). At CRYPTO 2013, Gentry, Sahai and Waters largely settled the problem by presenting leveled IBFHE constructions based on the Learning With Errors problem. However their constructions are not bootstrappable, and as a result, are not “pure” IBFHE schemes. The major challenge with bootstrapping in the identity-based setting is that it must be possible to non-interactively derive from the public parameters an “encryption” of the secret key for an arbitrary identity. All presently-known leveled IBFHE schemes only allow bootstrapping if such an “encryption” of the secret key is supplied out-of-band. In this work, we present a “pure” IBFHE scheme from indistinguishability obfuscation, and extend the result to the attribute-based setting. Our attribute-based scheme is the first to support homomorphic evaluation on ciphertexts with different attributes. Finally, we characterize presently-known leveled IBFHE schemes with a view to developing a “compiler” from a leveled IBFHE scheme to a bootstrappable IBFHE scheme, and sufficient conditions are identified.
Michael Clear, Ciarán McGoldrick
Proxy Re-encryption with Unforgeable Re-encryption Keys
Abstract
Proxy re-encryption (PRE) provides nice solutions to the delegation of decryption rights. In proxy re-encryption, the delegator Alice generates re-encryption keys for a semi-trusted proxy, with which the proxy can translate a ciphertext intended for Alice into a ciphertext for the delegatee Bob of the same plaintext. Existing PRE schemes have considered the security requirement that collusion between the proxy and the delegatees must not expose the delegator's secret key. However, almost all schemes, as far as we know, fail to guarantee that the proxy and the delegatees cannot collude to generate new re-encryption keys from the delegator to any user who has not been authorized by the delegator. In this paper, we first define the notion of unforgeability of re-encryption keys to capture such attacks. Then, we present a non-interactive CPA-secure PRE scheme that is resistant to collusion attacks forging re-encryption keys. Both the ciphertext and the re-encryption key have constant size. Finally, we extend the CPA construction to a CCA-secure scheme.
Hui Guo, Zhenfeng Zhang, Jiang Zhang
On the Lossiness of \(2^k\)-th Power and the Instantiability of Rabin-OAEP
Abstract
Seurin (PKC 2014) proposed the 2-Φ/4-hiding assumption, which asserts the indistinguishability of Blum numbers from pseudo Blum numbers. In this paper, we investigate the lossiness of the \(2^k\)-th power map based on the \(2^k\)-Φ/4-hiding assumption, an extension of the 2-Φ/4-hiding assumption. We prove that the \(2^k\)-th power function is a lossy trapdoor permutation over the quadratic residuosity group. This new lossy trapdoor function has \(2k\) bits of lossiness for a \(k\)-bit exponent, while the RSA lossy trapdoor function given by Kiltz et al. (CRYPTO 2010) has \(k\) bits of lossiness for a \(k\)-bit exponent under the Φ-hiding assumption in lossy mode. We replace the squaring function in Rabin-OAEP with the \(2^k\)-th power map and show the instantiability of this modified Rabin-OAEP using the technique of Kiltz et al. (CRYPTO 2010). The modified Rabin-OAEP is more efficient than RSA-OAEP at the same security level. With an 80-bit security parameter and a 2048-bit modulus, the modified Rabin-OAEP can encrypt roughly 454 bits of message, while RSA-OAEP can encrypt roughly 274 bits.
Haiyang Xue, Bao Li, Xianhui Lu, Kunpeng Wang, Yamin Liu
Breaking and Fixing Cryptophia’s Short Combiner
Abstract
A combiner is a construction formed out of two hash functions that is secure if one of the underlying functions is. Conventional combiners are known not to support short outputs: if the hash functions have n-bit outputs the combiner should have at least almost 2n bits of output in order to be robust for collision resistance (Pietrzak, CRYPTO 2008). Mittelbach (ACNS 2013) introduced a relaxed security model for combiners and presented “Cryptophia’s short combiner,” a rather delicate construction of an n-bit combiner that achieves optimal collision, preimage, and second preimage security. We re-analyze Cryptophia’s combiner and show that a collision can be found in two queries and a second preimage in one query, invalidating the claimed results. We additionally propose a way to fix the design in order to re-establish the original security results.
Bart Mennink, Bart Preneel
FFT Key Recovery for Integral Attack
Abstract
An integral attack is one of the most powerful attacks against block ciphers. We propose a new technique for the integral attack called Fast Fourier Transform (FFT) key recovery. When the integral distinguisher uses \(N\) chosen plaintexts and the guessed key is \(k\) bits, a straightforward key recovery requires time complexity \(O(N 2^k)\). The FFT key recovery method, however, requires only time complexity \(O(N + k 2^k)\). As a previous result using the FFT, Collard et al. showed at ICISC 2007 that the FFT can reduce the time complexity of a linear attack. We show that the FFT can also reduce the complexity of the integral attack, and moreover, the estimation of the complexity is very simple. We first give the complexity of the FFT key recovery against three structures: the Even-Mansour scheme, a key-alternating cipher, and the Feistel cipher. As examples of these structures, we show integral attacks against Prøst, CLEFIA, and AES. As a result, 8-round Prøst \(\tilde{P}_{128,K}\) can be attacked with an approximate time complexity of \(2^{80}\). Moreover, 6-round AES and 12-round CLEFIA can be attacked with approximate time complexities of \(2^{52.6}\) and \(2^{87.5}\), respectively.
Yosuke Todo, Kazumaro Aoki
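To illustrate the speed-up described above, here is a minimal Python sketch (not the authors' algorithm) of the underlying idea: a statistic of the form \(\sum_c \mathrm{count}[c]\cdot f(c \oplus g)\) over all \(2^k\) key guesses \(g\) is an XOR-convolution, so it can be computed with Walsh-Hadamard transforms in \(O(k 2^k)\) after a single \(O(N)\) pass over the data, instead of the naive \(O(N 2^k)\) loop. The function names and the toy data are illustrative.

```python
# Toy sketch (not the paper's algorithm): evaluating a statistic for all 2^k
# key guesses at once via XOR-convolution, computed with a Walsh-Hadamard
# transform in O(k*2^k) instead of the naive O(N*2^k) loop over guesses.

def wht(a):
    """In-place Walsh-Hadamard transform of a list whose length is a power of two."""
    n, h = len(a), 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a

def all_key_statistics(data, f_table, k_bits):
    """For every key guess g, compute sum_c count[c] * f_table[c ^ g]."""
    n = 1 << k_bits
    count = [0] * n
    for c in data:                       # O(N): histogram of observed values
        count[c & (n - 1)] += 1
    A, B = wht(count[:]), wht(f_table[:])
    conv = wht([a * b for a, b in zip(A, B)])
    return [v // n for v in conv]        # inverse WHT = forward WHT / n

if __name__ == "__main__":
    import random
    k_bits = 8
    data = [random.randrange(256) for _ in range(1000)]
    f = [random.choice([0, 1]) for _ in range(256)]   # e.g. an S-box output bit
    fast = all_key_statistics(data, f, k_bits)
    g = 0x3A                                          # naive O(N) check for one guess
    assert fast[g] == sum(f[c ^ g] for c in data)
```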
Message Extension Attack against Authenticated Encryptions: Application to PANDA
Abstract
In this paper, a new cryptanalysis approach for a class of authenticated encryption schemes is presented, inspired by the previous length extension attack against hash-function-based MACs. The approach is called the message extension attack. The target class consists of schemes that initialize the internal state with a nonce and key, update the state with associated data and the message, extract a key stream from the state, and finally generate a tag from the updated state. A forgery attack can be mounted in the nonce-repeating model in the chosen-plaintext scenario when the function that updates the internal state is shared between processing the message and generating the tag. The message extension attack is then applied to PANDA, a dedicated authenticated encryption design submitted to CAESAR. An existential forgery attack is mounted with \(2^{5}\) chosen plaintexts, \(2^{64}\) computations, and negligible memory, which breaks the claimed 128-bit security in the nonce-repeating model. This is the first result that breaks the security claim of PANDA.
Yu Sasaki, Lei Wang
New Second Preimage Attack Variants against the MD-Structure
Abstract
We consider a situation where the adversary performs a second preimage attack and is able to slightly influence the preconditions under which the iterated hash function is used. In the first variant of the attack, the adversary is able to choose the initial value of the hash function after receiving the original message. In the second variant, the adversary is allowed to determine a prefix of the original message and has to create a second preimage with the same prefix. Both of these attacks use diamond structures, and the expected number of compression function calls required to complete each of them successfully is in \(\mathrm O(\sqrt{n} \cdot 2^{\frac{2n}{3}})\), while for a random oracle hash function it is in \(\mathrm O(2^n)\). We also show that it is possible to decrease the aforementioned expected value to \(\mathrm O(2^{\frac{2n-l}{3}})\) if the length of the original message is \(2^l\) and \(l\) is sufficiently large. Furthermore, we generalize these attacks to work against concatenated hash functions as well.
Tuomas Kortelainen, Juha Kortelainen
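As a rough illustration of the gap between these bounds (the value \(n = 256\) is an assumption for illustration, not taken from the abstract):

\[
  \sqrt{n}\cdot 2^{\frac{2n}{3}} \;=\; 2^{4}\cdot 2^{\frac{512}{3}} \;\approx\; 2^{174.7}
  \qquad\text{versus}\qquad
  \mathrm O(2^{n}) = \mathrm O(2^{256})\ \text{for a random oracle,}
\]

and with a long original message of \(2^l\) blocks, e.g. \(l = 55\), the first bound drops further to roughly \(2^{\frac{2n-l}{3}} = 2^{\frac{457}{3}} \approx 2^{152.3}\).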
Negotiating DNSSEC Algorithms over Legacy Proxies
Abstract
To ensure the best security and efficiency, cryptographic protocols should allow parties to negotiate the use of the ‘best’ cryptographic algorithms supported by the different parties; this is usually referred to as cipher-suite negotiation and is considered an essential feature of such protocols, e.g., TLS and IPsec. However, such negotiation is absent from protocols designed for the distribution of cryptographically signed objects, such as DNSSEC. One reason may be the challenge of securing the choice of the ‘best’ algorithm, especially in the presence of intermediate ‘proxies’ (crucial for performance), and in particular of providing solutions compatible with existing legacy servers and proxies; another reason may be a lack of understanding of the security and performance damage caused by the lack of negotiation.
We show that most DNSSEC-signed domains support only RSA 1024-bit signatures, which are considered insecure and are also larger than the alternatives; the likely reason is the lack of negotiation mechanisms. We present a DNSSEC negotiation mechanism allowing name servers to send responses containing only the keys and signatures required by the requesting resolver. Our design is compatible with intermediary proxies, and even with legacy proxies that do not support our negotiation mechanism. We show that our design enables incremental deployment, has a negligible impact on the overhead of DNSSEC as currently deployed, and significantly improves the performance of DNSSEC if more domains support multiple algorithms; we also show significant security benefits from the use of our design under a realistic, rational adoption model. The ideas of our design apply to other systems requiring secure and efficient distribution of signed data, such as wireless sensor networks (WSNs).
Amir Herzberg, Haya Shulman
A Censorship-Resistant, Privacy-Enhancing and Fully Decentralized Name System
Abstract
The Domain Name System (DNS) is vital for access to information on the Internet. This makes it a target for attackers whose aim is to suppress free access to information. This paper introduces the design and implementation of the GNU Name System (GNS), a fully decentralized and censorship-resistant name system. GNS provides a privacy-enhancing alternative to DNS which preserves the desirable property of memorable names. Due to its design, it can also double as a partial replacement of public key infrastructures, such as X.509. The design of GNS incorporates the capability to integrate and coexist with DNS. GNS is based on the principle of a petname system and builds on ideas from the Simple Distributed Security Infrastructure (SDSI), addressing a central issue with the decentralized mapping of secure identifiers to memorable names: namely the impossibility of providing a global, secure and memorable mapping without a trusted authority. GNS uses the transitivity in the SDSI design to replace the trusted root with secure delegation of authority, thus making petnames useful to other users while operating under a very strong adversary model. In addition to describing the GNS design, we also discuss some of the mechanisms that are needed to smoothly integrate GNS with existing processes and procedures in Web browsers. Specifically, we show how GNS is able to transparently support many assumptions that the existing HTTP(S) infrastructure makes about globally unique names.
Matthias Wachs, Martin Schanzenbach, Christian Grothoff
Universally Composable Oblivious Transfer Based on a Variant of LPN
Abstract
Oblivious transfer (OT) is a fundamental two-party cryptographic primitive that implies secure multiparty computation. In this paper, we introduce the first OT protocol based on the Learning Parity with Noise (LPN) problem. More specifically, we use the LPN variant that was introduced by Alekhnovich (FOCS 2003). We prove that our protocol is secure against active static adversaries in the Universal Composability framework in the common reference string model. Our constructions are based solely on an LPN-style assumption and thus represent a clear next step from current code-based OT protocols, which require an additional assumption related to the indistinguishability of public keys from random matrices. Our constructions are inspired by the techniques used to obtain OT based on the McEliece cryptosystem.
Bernardo David, Rafael Dowsley, Anderson C. A. Nascimento
Converting PKI-Based Authenticated Key Exchange to Identity-Based
Abstract
Fiore and Gennaro proposed an identity-based authenticated key exchange (ID-AKE) scheme without pairings. Though their scheme is very efficient in both communication and computation, it is not secure against some advanced exposure attacks. In this paper, we achieve exposure-resilient ID-AKE schemes without pairings. Specifically, we introduce two security-preserving generic conversions from ordinary PKI-based AKE (PKI-AKE) to ID-AKE (i.e., the exposure resilience of the PKI-AKE is preserved in the converted ID-AKE). Our first conversion is for the post-specified peer model (i.e., the peer can be unknown at the beginning of the protocol), and our second conversion is for the pre-specified peer model (i.e., the peer must be fixed at the beginning of the protocol). The merit of the first conversion is that it is round-preserving (i.e., the converted ID-AKE has the same round complexity as the PKI-AKE). The merit of the second conversion is its rich instantiability (i.e., it can be instantiated from various kinds of assumptions, such as RSA and lattices as well as Diffie-Hellman variants), thanks to the rich instantiability of known PKI-AKE schemes in the pre-specified peer model.
Koutarou Suzuki, Kazuki Yoneyama
Proving Correctness and Security of Two-Party Computation Implemented in Java in Presence of a Semi-honest Sender
Abstract
We provide a proof of correctness and security of a two-party-computation protocol based on garbled circuits and oblivious transfer in the presence of a semi-honest sender. To achieve this we are the first to combine a machine-assisted proof of correctness with advanced cryptographic primitives to prove security properties of Java code. The machine-assisted part of the proof is conducted with KeY, an interactive theorem prover.
The proof includes a correctness result for the construction and evaluation of garbled circuits. This is particularly interesting since checking such an implementation by hand would be very tedious and error-prone. Although we stick to the secure two-party-computation of an n-bit AND in this paper, our approach is modular, and we explain how our techniques can be applied to other functions.
To prove the security of the protocol for an honest-but-curious sender and an honest receiver, we use the framework presented by Küsters et al. for the cryptographic verification of Java programs. As part of our work, we add oblivious transfer to the set of cryptographic primitives supported by the framework. This is a general contribution beyond our results for concrete Java code.
Florian Böhl, Simon Greiner, Patrik Scheidecker
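As a concrete (and heavily simplified) illustration of the garbled-circuit building block whose correctness the paper verifies, here is a toy Python sketch of garbling and evaluating a single AND gate. It is not the verified Java code: the SHA-256-based row encryption and the zero-tag row detection are simplifications chosen for brevity.

```python
# Toy sketch of one garbled AND gate (illustrative; the paper verifies a full
# Java implementation with KeY). Wire labels are random 16-byte keys; each
# table row encrypts the output label under a pair of input labels, with a
# zero tag so the evaluator can recognise the single row it can open.
import os, hashlib, random

LABEL = 16   # label length in bytes
TAG = 16     # length of the all-zero tag appended before encryption

def H(ka, kb):
    """Derive a one-time pad (32 bytes = LABEL + TAG) from two input labels."""
    return hashlib.sha256(ka + kb).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and_gate():
    labels = {w: (os.urandom(LABEL), os.urandom(LABEL)) for w in ("a", "b", "out")}
    rows = []
    for va in (0, 1):
        for vb in (0, 1):
            out_label = labels["out"][va & vb]
            rows.append(xor(H(labels["a"][va], labels["b"][vb]),
                            out_label + b"\x00" * TAG))
    random.shuffle(rows)                      # hide the row order
    return labels, rows

def evaluate(rows, ka, kb):
    """Given one label per input wire, open the row whose tag decrypts to zero."""
    for row in rows:
        plain = xor(H(ka, kb), row)
        if plain[LABEL:] == b"\x00" * TAG:
            return plain[:LABEL]
    raise ValueError("no row matched")

if __name__ == "__main__":
    labels, rows = garble_and_gate()
    for va in (0, 1):
        for vb in (0, 1):
            out = evaluate(rows, labels["a"][va], labels["b"][vb])
            assert out == labels["out"][va & vb]
```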
Mining API Calls and Permissions for Android Malware Detection
Abstract
The popularity of the Android platform is increasing sharply due to its large market share and open nature. This increased popularity is making Android an enticing target for malware. An alarming trend is the increasing sophistication of Android malware in evading detection by traditional signature-based scanners. Several approaches have been proposed in the literature for Android malware detection. However, most of them are less effective in terms of true positive rate and involve computational overhead. In this paper, we propose an effective approach to mitigate the problem of Android malware detection using models based on static code analysis. The proposed models are built to capture features relevant to malware behaviour, based on API calls as well as permissions present in various Android applications. The models are then evaluated using Naive Bayes and K-Nearest Neighbour classifiers. The proposed models are able to detect real malware in the wild, achieving an accuracy of 95.1% with a true positive rate of up to one.
Akanksha Sharma, Subrat Kumar Dash
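A minimal sketch of the kind of pipeline the abstract describes, assuming binary permission/API-call features and the two classifiers mentioned; the applications, feature strings, and labels below are made up for illustration.

```python
# Minimal sketch of a permission/API-call based detector (hypothetical data).
# Each app is represented by the set of permissions and API calls it uses;
# Bernoulli Naive Bayes and k-NN are trained on the binary feature vectors.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import BernoulliNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Toy corpus: one "document" per app listing its permissions / API calls.
apps = [
    "SEND_SMS READ_CONTACTS getDeviceId sendTextMessage",   # malware-like
    "INTERNET ACCESS_NETWORK_STATE openConnection",          # benign-like
    "SEND_SMS RECEIVE_SMS abortBroadcast getSubscriberId",   # malware-like
    "INTERNET READ_EXTERNAL_STORAGE openConnection",         # benign-like
]
labels = [1, 0, 1, 0]  # 1 = malware, 0 = benign

X = CountVectorizer(binary=True, token_pattern=r"\S+").fit_transform(apps)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.5, random_state=0)

for clf in (BernoulliNB(), KNeighborsClassifier(n_neighbors=1)):
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__, accuracy_score(y_te, clf.predict(X_te)))
```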
Direct Anonymous Attestations with Dependent Basename Opening
Abstract
We introduce a new privacy-friendly cryptographic primitive we call Direct Anonymous Attestations with Dependent Basename Opening (DAA-DBO). Such a primitive is a Direct Anonymous Attestation in which anonymity can be revoked only if a specific authority, called the admitter, has allowed the revocation of DAA signatures that include a specific basename. We also present an efficient scheme that achieves this functionality and is secure in the random oracle model. Furthermore, we provide a prototype implementation of an anonymous transit pass system based on this new primitive. Compared to previous privacy-friendly cryptographic primitives with partial linkability, we provide a way to share the power to open signatures between two entities that is more practical than the use of conventional techniques from threshold cryptography.
Nicolas Desmoulins, Roch Lescuyer, Olivier Sanders, Jacques Traoré
A Storage-Efficient and Robust Private Information Retrieval Scheme Allowing Few Servers
Abstract
Since the concept of locally decodable codes was introduced by Katz and Trevisan in 2000 [11], it has been well known that information-theoretically secure private information retrieval (PIR) schemes can be built using locally decodable codes (LDCs) [15]. In this paper, we construct a Byzantine-robust PIR scheme using the multiplicity codes introduced by Kopparty et al. [12]. Our main contributions are, on the one hand, to avoid full replication of the database on each server, which significantly reduces the global redundancy, and, on the other hand, to achieve a much lower locality in the PIR context than in the LDC context. This shows that there exist two different notions: LDC-locality and PIR-locality. This is made possible by exploiting geometric properties of multiplicity codes.
Daniel Augot, Françoise Levy-dit-Vehel, Abdullatif Shikfa
Should Silence be Heard? Fair Rational Secret Sharing with Silent and Non-silent Players
Abstract
Parties in a rational secret sharing protocol may use mobile devices which are severely resource-constrained. Therefore, it may be in the interest of such parties to try to obtain the secret while spending as little as possible on communication and computation. This preference differs from that of a traditional rational player and is similar to free-riding. We call such players ‘silent’. The traditional rational player is represented as a ‘non-silent’ player, and we modify its preference to incorporate the facts that 1) it is indifferent between incurring a cost and not incurring a cost when everybody is able to reconstruct the secret and 2) it prefers that nobody obtains the secret over some players obtaining the secret free of cost while others incur a cost in reconstructing it. We thus introduce a mixed-utility model consisting of the utility of obtaining the secret and the cost of computation incurred to obtain it. We propose new rational secret reconstruction protocols in the simultaneous channel model, for both the online and offline dealer scenarios, that satisfy a new notion of fairness which we call cost-aware complete fairness, in the presence of both silent and non-silent players. Our protocol with the offline dealer makes use of a simplified version of the Boneh-Gentry-Waters [21] broadcast encryption scheme. Both types of parties find it to be a (Bayesian) computational Nash equilibrium to follow our protocols, and the protocols are \((\lceil\frac{t}{2}\rceil-1)\)-resilient for non-silent players.
Sourya Joyee De, Sushmita Ruj, Asim K. Pal
Attribute-Based Signatures with User-Controlled Linkability
Abstract
In this paper, we introduce Attribute-Based Signatures with User-Controlled Linkability (ABS-UCL). Attribute-based signatures allow a signer who has enough credentials/attributes to anonymously sign a message w.r.t. some public policy revealing neither the attributes used nor his identity. User-controlled linkability is a new feature which allows a user to make some of his signatures directed at the same recipient linkable while still retaining anonymity. Such a feature is useful for many real-life applications. We give a general framework for constructing ABS-UCL and present an efficient instantiation of the construction that supports multiple attribute authorities.
Ali El Kaafarani, Liqun Chen, Essam Ghadafi, James Davenport
Towards a Full-Featured Implementation of Attribute Based Credentials on Smart Cards
Abstract
Attribute-based Credentials (ABCs) allow citizens to prove certain properties about themselves without necessarily revealing their full identity. Smart cards are an attractive container for such credentials, for security and privacy reasons. But their limited processing power and random-access storage capacity pose a severe challenge. Recently, we, the IRMA team, managed to fully implement a limited subset of the Idemix ABC system on a smart card, with acceptable running times. In this paper we extend this functionality by overcoming the main hurdle: limited RAM. We implement an efficient extended Pseudo-Random Number Generator (PRNG) for recomputing pseudorandomness and reconstructing variables. Using this we implement Idemix standard and domain pseudonyms, AND proofs based on prime-encoded attributes, and equality proofs of representation modulo a composite, together with terminal verification and secure messaging. In contrast to prior work that only addressed the verification of one credential with a single attribute (namely, the master secret), we can now perform multi-credential proofs on credentials with 5 attributes and complex proofs in reasonable time. We provide a detailed performance analysis and compare our results to other approaches.
Antonio de la Piedra, Jaap-Henk Hoepman, Pim Vullers
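A generic illustration of the recompute-rather-than-store idea behind such an extended PRNG (this is not the IRMA card code; the HMAC-SHA-256 derivation is an assumption made for the sketch): only a short seed is kept, and each pseudorandom value is re-derived from its index whenever a later phase of the proof needs it.

```python
# Generic illustration (not the IRMA card code): instead of keeping every
# pseudorandom value in scarce RAM, derive each value on demand from a short
# seed and its index, so it can be recomputed identically in a later phase.
import hmac, hashlib, os

class RecomputablePRNG:
    def __init__(self, seed: bytes):
        self.seed = seed                     # only the seed is kept in memory

    def value(self, index: int, nbytes: int) -> bytes:
        """Deterministically (re)derive the pseudorandom value for `index`."""
        out, counter = b"", 0
        while len(out) < nbytes:
            msg = index.to_bytes(4, "big") + counter.to_bytes(4, "big")
            out += hmac.new(self.seed, msg, hashlib.sha256).digest()
            counter += 1
        return out[:nbytes]

if __name__ == "__main__":
    prng = RecomputablePRNG(os.urandom(16))
    commit_phase = prng.value(0, 32)     # used while building the commitment
    response_phase = prng.value(0, 32)   # recomputed later instead of stored
    assert commit_phase == response_phase
```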
Security of a Privacy-Preserving Biometric Authentication Protocol Revisited
Abstract
Biometric authentication establishes the identity of an individual based on biometric templates (e.g. fingerprints, retina scans etc.). Although biometric authentication has important advantages and many applications, it also raises serious security and privacy concerns. Here, we investigate a biometric authentication protocol that has been proposed by Bringer et al. and adopts a distributed architecture (i.e. multiple entities are involved in the authentication process). This protocol was proven to be secure and privacy-preserving in the honest-but-curious (or passive) attack model. We present an attack algorithm that can be employed to mount a number of attacks on the protocol under investigation. We then propose an improved version of the Bringer et al. protocol that is secure in the malicious (or active) insider attack model and has forward security.
Aysajan Abidin, Kanta Matsuura, Aikaterini Mitrokotsa
Private and Dynamic Time-Series Data Aggregation with Trust Relaxation
Abstract
With the advent of networking applications collecting user data on a massive scale, the privacy of individual users is a major concern. The main challenge is the design of a solution that allows the data analyzer to compute global statistics over the set of individual inputs that are protected by some confidentiality mechanism. Joye et al. [7] recently suggested a solution that allows a centralized party to compute the sum of encrypted inputs collected through a smart metering network. The main shortcomings of this solution are its reliance on a trusted dealer for key distribution and the need for frequent key updates. In this paper we introduce a secure protocol for the aggregation of time-series data that is based on the Joye et al. [7] scheme and in which the main shortcomings of the latter, namely the requirement for key updates and for the trusted dealer, are eliminated. Moreover, our scheme supports dynamic group management, whereby, as opposed to Joye et al. [7], leave and join operations do not trigger a key update at the users.
Iraklis Leontiadis, Kaoutar Elkhiyaoui, Refik Molva
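For context, the following Python sketch shows the baseline aggregation idea of Joye et al. [7] that the paper starts from (toy parameters, a simplified hash-to-group, and a dealer handing out keys that sum to zero; the paper's contribution is precisely to remove that dealer and the key updates). It assumes the sum of the readings stays below N and Python 3.8+ for the negative-exponent pow.

```python
# Sketch of a Joye et al. [7]-style aggregation (toy parameters, simplified
# hash-to-group; NOT the dealer-free scheme of the paper). The users' keys
# s_1..s_n and the aggregator key s_0 sum to zero, so the product of the
# ciphertexts reveals only the sum of the readings.
import hashlib, math, random

p, q = 1000003, 1000033            # toy primes; real schemes use large moduli
N = p * q
N2 = N * N

def H(t: int) -> int:
    """Toy hash of the time period into Z_{N^2}^* (simplified)."""
    h = int.from_bytes(hashlib.sha256(str(t).encode()).digest(), "big") % N2
    return h if math.gcd(h, N2) == 1 else 1

def encrypt(x: int, s: int, t: int) -> int:
    return (1 + x * N) * pow(H(t), s, N2) % N2

def aggregate(ciphertexts, s0: int, t: int) -> int:
    v = pow(H(t), s0, N2)          # negative exponent: needs Python 3.8+
    for c in ciphertexts:
        v = v * c % N2
    return (v - 1) // N            # valid as long as the sum of readings < N

if __name__ == "__main__":
    n, t = 5, 42
    user_keys = [random.randrange(N2) for _ in range(n)]
    s0 = -sum(user_keys)           # aggregator key: all keys sum to zero
    readings = [random.randrange(100) for _ in range(n)]
    cts = [encrypt(x, s, t) for x, s in zip(readings, user_keys)]
    assert aggregate(cts, s0, t) == sum(readings)
```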
Privacy-Enhanced Participatory Sensing with Collusion Resistance and Data Aggregation
Abstract
Participatory sensing enables new paradigms and markets for information collection based on the ubiquitous availability of smartphones, but also introduces privacy challenges for participating users and their data. In this work, we review existing security models for privacy-preserving participatory sensing and propose several improvements that are both of theoretical and practical significance.
We first address an important drawback of prior work, namely the lack of consideration of collusion attacks that are highly relevant for such multi-user settings. We explain why existing security models are insufficient and why previous protocols become insecure in the presence of colluding parties. We remedy this problem by providing new security and privacy definitions that guarantee meaningful forms of collusion resistance. We propose new collusion-resistant participatory sensing protocols satisfying our definitions: a generic construction that uses anonymous identity-based encryption (IBE) and its practical instantiation based on the Boneh-Franklin IBE scheme.
We then extend the functionality of participatory sensing by adding the ability to perform aggregation on the data submitted by the users, without sacrificing their privacy. We realize this through an additively-homomorphic IBE scheme which in turn is constructed by slightly modifying the Boneh-Franklin IBE scheme. From a practical point of view, the resulting scheme is suitable for calculations with small sensor readings/values such as temperature measurements, noise levels, or prices, which is sufficient for many applications of participatory sensing.
Felix Günther, Mark Manulis, Andreas Peter
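To illustrate why such a scheme suits small readings, here is a stand-in sketch using exponent ElGamal over a toy prime field rather than the modified Boneh-Franklin IBE of the paper: encoding the reading in the exponent makes ciphertexts additively homomorphic, but decryption has to solve a small discrete log, so only small values are practical.

```python
# Stand-in illustration (exponent ElGamal, NOT the paper's modified
# Boneh-Franklin IBE): the reading sits in the exponent, so ciphertexts can be
# multiplied to add readings, and decryption brute-forces a small discrete log.
import random

p = 2**127 - 1          # toy prime; the paper works in pairing groups
g = 3

def keygen():
    x = random.randrange(1, p - 1)
    return x, pow(g, x, p)

def encrypt(pk, m):
    r = random.randrange(1, p - 1)
    return pow(g, r, p), pow(g, m, p) * pow(pk, r, p) % p

def add(c1, c2):
    """Component-wise product of ciphertexts adds the underlying readings."""
    return c1[0] * c2[0] % p, c1[1] * c2[1] % p

def decrypt(sk, c, max_value=10000):
    gm = c[1] * pow(c[0], -sk, p) % p     # needs Python 3.8+ for -sk
    for m in range(max_value + 1):        # brute-force small discrete log
        if pow(g, m, p) == gm:
            return m
    raise ValueError("value out of range")

if __name__ == "__main__":
    sk, pk = keygen()
    readings = [21, 17, 30]               # e.g. temperature measurements
    total = encrypt(pk, readings[0])
    for x in readings[1:]:
        total = add(total, encrypt(pk, x))
    assert decrypt(sk, total) == sum(readings)
```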
Short Comparable Encryption
Abstract
The notion of comparable encryption was introduced at ESORICS 2013 [18] to overcome a weakness of order-preserving encryption (OPE). While OPE makes it possible to compare the numerical order of numbers from their ciphertexts alone, comparable encryption allows comparing the numerical order of a pair of numbers from their ciphertexts only if one of the ciphertexts is accompanied by the corresponding token. Hence, it significantly reduces the amount of knowledge about the encrypted numbers disclosed by their ciphertexts. Since OPE is considered a key primitive for encrypted databases such as CryptDB [31] and Monomi [36], comparable encryption has the potential to enhance the security of these applications. However, the previous comparable encryption scheme requires large ciphertexts, which degrades the performance of encrypted databases so severely that it is no longer practical. In this paper, we propose a very short comparable encryption. While previous works encrypt each bit into a string of security-parameter length, say 160 bits, ours encrypts each bit into a single ternary symbol. This is even shorter than the ciphertext length of OPEs.
Jun Furukawa
Decentralized Distributed Data Usage Control
Abstract
Data usage control provides mechanisms for data owners to remain in control over how their data is used after it has been shared. Many data usage policies can only be enforced on a global scale, as they refer to data usage events happening within multiple distributed systems: ‘not more than three employees may ever read this document’, or ‘no copy of this document may be modified after it has been archived’. While such global policies can be enforced by a centralized enforcement infrastructure that observes all data usage events in all relevant systems, such a strategy involves heavy communication. We show how the overall coordination overhead can be reduced by deploying a decentralized enforcement infrastructure. Our contributions are: (i) a formal distributed data usage control system model; (ii) formal methods for identifying all systems relevant for evaluating a given policy; (iii) identification of situations in which no coordination between systems is necessary without compromising policy enforcement; (iv) proofs of correctness of (ii, iii).
Florian Kelbert, Alexander Pretschner
Efficient Signatures with Tight Real World Security in the Random-Oracle Model
Abstract
Security for digital signature schemes is most commonly analyzed in an idealized single-user setting where the attacker is provided with only a single public key. However, when digital signature schemes are deployed in practice they are often used by many users, each having their own public key, e.g., in authenticated key exchange (AKE) protocols. Common security models for AKE capture real-world capabilities of an adversary by allowing it, among other things, to corrupt secret user keys. For digital signatures it is well known that security in the idealized single-user setting implies security in this stronger and more realistic multi-user setting with corruptions. However, the security reduction loses a factor which is linear in the number of users, and it is not clear how to avoid this loss in general.
In this paper we propose an efficient signature scheme whose security reduction in the above setting is tight: the reduction loses only a factor of about 2. When 80 bits of security are required, our signatures are roughly 2700 bits in size.
Christoph Bader
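A back-of-the-envelope illustration of why the linear loss matters (the parameter values below are assumptions for illustration): a generic single-user-to-multi-user reduction guesses which of the \(n\) users the forger will attack, so

\[
  \mathrm{Adv}^{\mathrm{multi}}(\mathcal A)\;\le\; n\cdot \mathrm{Adv}^{\mathrm{single}}(\mathcal B).
\]

With, say, \(n = 2^{30}\) users, achieving 80-bit multi-user security via this route requires parameters chosen for roughly 110-bit single-user security, whereas a reduction losing only a factor of about 2 leaves the parameters essentially unchanged.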
More Sparse Families of Pairing-Friendly Elliptic Curves
Abstract
Generating pairing-friendly elliptic curves is a crucial step in the deployment of pairing-based cryptographic applications. The most efficient method for their construction is based on polynomial families, namely complete families, complete families with variable discriminant, and sparse families. In this work we further study the case of sparse families, which seem to produce more pairing-friendly elliptic curves than the other two polynomial families and can also lead to better ρ-values in many cases. We present two general methods for producing sparse families and apply them to four embedding degrees \(k \in \lbrace 5, 8, 10, 12 \rbrace\). In particular, for k = 5 we introduce for the first time the use of Pell equations, setting a record with ρ = 3/2, and we present a family that has a better chance of producing suitable curve parameters than any other reported family for \(k \notin \lbrace 3, 4, 6 \rbrace\). In addition, we generalise some existing examples of sparse families for k = 8, 12 and provide extensive experimental results for every new sparse family for \(k \in \lbrace 5, 8, 10, 12 \rbrace\) regarding the number of constructed elliptic curve parameters.
Georgios Fotiadis, Elisavet Konstantinou
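For reference, the ρ-value mentioned above is the standard measure of how close the curve's field size is to the size of its prime-order subgroup:

\[
  \rho \;=\; \frac{\log q}{\log r},
\]

where \(q\) is the size of the base field and \(r\) is the prime order of the pairing-friendly subgroup; ρ = 1 is ideal, so a family with ρ = 3/2 at embedding degree k = 5 improves on families with larger ρ-values at the same embedding degree.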
Backmatter
Metadata
Title
Cryptology and Network Security
Edited by
Dimitris Gritzalis
Aggelos Kiayias
Ioannis Askoxylakis
Copyright Year
2014
Publisher
Springer International Publishing
Electronic ISBN
978-3-319-12280-9
Print ISBN
978-3-319-12279-3
DOI
https://doi.org/10.1007/978-3-319-12280-9