
About this Book

This book constitutes the refereed proceedings of the 5th International Conference on the Theory and Application of Cryptographic Techniques in Africa, AFRICACRYPT 2012, held in Ifrane, Morocco, in July 2012. The 24 papers presented together with abstracts of 2 invited talks were carefully reviewed and selected from 56 submissions. They are organized in topical sections on signature schemes, stream ciphers, applications of information theory, block ciphers, network security protocols, public-key cryptography, cryptanalysis of hash functions, hash functions: design and implementation, algorithms for public-key cryptography, and cryptographic protocols.



Signature Schemes

Batch Verification of ECDSA Signatures

In this paper, we study several algorithms for batch verification of ECDSA signatures. The first of these algorithms is based upon the naive idea of taking square roots in the underlying field. We also propose two new and efficient algorithms which replace square-root computations by symbolic manipulations. Experiments carried out on NIST prime curves demonstrate a maximum speedup of more than six over individual verification if all the signatures in the batch belong to the same signer, and a maximum speedup of about two if the signatures in the batch belong to different signers, both achieved by a fast variant of our second symbolic-manipulation algorithm. In terms of security, all the studied algorithms are equivalent to standard ECDSA* batch verification. These algorithms are practical only for small batch sizes (≤ 8). To the best of our knowledge, this is the first reported study on the batch verification of original ECDSA signatures.
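As context, the same-signer batch check for ECDSA* (the variant in which each signature carries the full point R rather than only r = R.x mod n) can be sketched as follows. The choice of secp256k1 and all helper names are ours, not the paper's; the paper's experiments use NIST prime curves and symbolic-manipulation algorithms not shown here.

```python
import hashlib
import secrets

# secp256k1 parameters (an illustrative curve choice)
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(p1, p2):
    # Affine point addition; None stands for the point at infinity.
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        lam = 3 * x1 * x1 * pow(2 * y1, -1, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def ec_mul(k, pt):
    acc = None
    while k:
        if k & 1:
            acc = ec_add(acc, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return acc

def msg_hash(msg):
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % N

def sign_star(d, msg):
    # ECDSA*: the signature carries the full point R, not just r = R.x mod n.
    k = secrets.randbelow(N - 1) + 1
    R = ec_mul(k, G)
    s = pow(k, -1, N) * (msg_hash(msg) + (R[0] % N) * d) % N
    return R, s

def batch_verify_star(Q, batch):
    # Same-signer batch check: sum(R_i) == (sum u_i)*G + (sum v_i)*Q,
    # replacing one scalar multiplication of Q per signature by a single one.
    u_sum, v_sum, r_sum = 0, 0, None
    for (R, s), msg in batch:
        w = pow(s, -1, N)
        u_sum = (u_sum + msg_hash(msg) * w) % N
        v_sum = (v_sum + (R[0] % N) * w) % N
        r_sum = ec_add(r_sum, R)
    return r_sum == ec_add(ec_mul(u_sum, G), ec_mul(v_sum, Q))
```

Individually, each check R = uG + vQ holds because uG + vQ = s⁻¹(z + rd)G = kG; summing the checks preserves this while sharing the two large scalar multiplications.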
Sabyasachi Karati, Abhijit Das, Dipanwita Roychowdhury, Bhargav Bellur, Debojyoti Bhattacharya, Aravind Iyer

Extended Security Arguments for Signature Schemes

The well-known forking lemma by Pointcheval and Stern has been used to prove the security of the so-called generic signature schemes, which are obtained via the Fiat-Shamir transform from three-pass identification schemes. A number of five-pass identification protocols have been proposed in the last few years; unfortunately, these newly proposed schemes fall outside the original framework, so extending the forking lemma and the Fiat-Shamir transform would allow new signature schemes to be obtained. In this paper, we provide an extension of the forking lemma in order to assess the security of what we call n-generic signature schemes. These include signature schemes that are derived from certain (2n + 1)-pass identification schemes. We thus obtain a generic methodology for proving the security of a number of signature schemes derived from recently published five-pass identification protocols, and potentially of (2n + 1)-pass identification schemes to come.
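The original three-pass case of the Fiat-Shamir transform can be sketched with Schnorr's identification scheme: the verifier's random challenge is replaced by a hash of the commitment and the message. The toy group parameters below are our own and far too small for real use.

```python
import hashlib
import secrets

# Toy Schnorr group: p = 2q + 1 with q prime; g = 2^2 generates the
# order-q subgroup. These parameters are far too small for real use.
q, p, g = 1019, 2039, 4

def challenge(R, msg):
    # Fiat-Shamir: the verifier's challenge becomes a hash of the
    # prover's commitment R and the message.
    return int.from_bytes(hashlib.sha256(str(R).encode() + msg).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)

def sign(x, msg):
    k = secrets.randbelow(q - 1) + 1   # pass 1: commitment
    R = pow(g, k, p)
    c = challenge(R, msg)              # pass 2, made non-interactive
    s = (k + c * x) % q                # pass 3: response
    return R, s

def verify(y, msg, sig):
    R, s = sig
    return pow(g, s, p) == R * pow(y, challenge(R, msg), p) % p
```

Verification works because g^s = g^(k + cx) = R · y^c; the paper's extension covers schemes where this three-move pattern is replaced by five or more passes.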
Sidi Mohamed El Yousfi Alaoui, Özgür Dagdelen, Pascal Véron, David Galindo, Pierre-Louis Cayrel

Sanitizable Signatures with Several Signers and Sanitizers

Sanitizable signatures allow a signer of a message to give one specific receiver, called a sanitizer, the power to modify some designated parts of the signed message. Most of the existing constructions consider one single signer giving such a possibility to one single sanitizer. In this paper, we formalize the concept with n signers and m sanitizers, taking into account recent models (for 1 signer and 1 sanitizer) on the subject. We next give a generic construction based on the use of both group signatures and a new cryptographic building block, called a trapdoor or proof, that may be of independent interest.
Sébastien Canard, Amandine Jambert, Roch Lescuyer

Stream Ciphers

Attack Based on Direct Sum Decomposition against the Nonlinear Filter Generator

The nonlinear filter generator (NLFG) is a powerful building block commonly used in stream ciphers. In this paper, we present a direct sum decomposition of the NLFG output sequence that leads to a system of linear equations in the initial state of the NLFG and, further, to an efficient algebraic attack. The coefficients of the equation system depend only on the NLFG structure. The attack operates in an online/offline manner, doing most of the work (determining the coefficients of the equation system) in the offline phase. Thus the online phase is very fast, requiring only four multiplications and one diagonalization of n×n matrices.
Compared with related work, our attack offers advantages in both online computation cost and success probability. On the one hand, far fewer output bits and significantly less matrix computation are required in our attack, although the online computation complexity O(LC) (where LC is the linear complexity of the output sequence) is the same as in the known Rønjom-Helleseth attack. On the other hand, the success probability of the attack is analyzed in this paper, unlike in most prior work. The success probability of this algebraic attack is \(1-2^{-\phi(2^n-1)}\) (where φ(·) is the Euler function), which is much greater than \(1-2^{-n}\), the success probability of the Rønjom-Helleseth attack.
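The two success probabilities can be compared numerically for small register lengths n; the totient helper below is ours.

```python
def euler_phi(m):
    # Euler's totient via trial-division factorization (fine for small m).
    result, k = m, 2
    while k * k <= m:
        if m % k == 0:
            while m % k == 0:
                m //= k
            result -= result // k
        k += 1
    if m > 1:
        result -= result // m
    return result

# Success probabilities from the abstract, for small register lengths n:
# this attack: 1 - 2^(-phi(2^n - 1));  Ronjom-Helleseth: 1 - 2^(-n).
for n in (4, 5, 6, 8):
    phi = euler_phi(2**n - 1)
    assert 1 - 2.0**-phi >= 1 - 2.0**-n   # phi(2^n - 1) >= n for these n
```

For n = 8, for instance, φ(255) = φ(3·5·17) = 128, so the failure probability 2^(-128) is astronomically smaller than 2^(-8).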
Jingjing Wang, Xiangxue Li, Kefei Chen, Wenzheng Zhang

Applications of Information Theory

Fuzzy Vault for Multiple Users

We introduce an extension of the Fuzzy Vault scheme to support multiple users. Namely, different participants might share the same vault to store their secrets. In the classical Fuzzy Vault, a user locks a secret using a set A, and one can unlock it by providing another set B which overlaps substantially with A. In our Extended Fuzzy Vault, the vault is locked with different sets A j , j = 1,…,l and can be unlocked with a set B that has enough common elements with one of these A j . Our results are based on Folded Reed-Solomon Codes and their recent list-recovery algorithm; in this way, our idea can be interpreted as a natural extension from the list decoding of Reed-Solomon Codes to list recovery. We give a security analysis of our proposal, relying on the security properties of Secure Sketches to gauge our results. Finally, we provide details on an implementation to support our ideas.
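As background, the following sketch implements the classical single-user fuzzy vault (Juels-Sudan style), with brute-force subset interpolation standing in for the list-recovery decoding used in the paper; the field size, parameters, and helper names are illustrative only.

```python
import secrets
from itertools import combinations

P = 257  # small prime field, for illustration only

def poly_eval(coeffs, x):
    # Horner's rule; coeffs are low-degree first.
    y = 0
    for c in reversed(coeffs):
        y = (y * x + c) % P
    return y

def lock(secret_coeffs, A, n_chaff):
    # Genuine points lie on the secret polynomial; chaff points do not.
    vault = [(a, poly_eval(secret_coeffs, a)) for a in A]
    used = set(A)
    while len(vault) < len(A) + n_chaff:
        x = secrets.randbelow(P)
        if x in used:
            continue
        used.add(x)
        y = secrets.randbelow(P)
        if y != poly_eval(secret_coeffs, x):
            vault.append((x, y))
    secrets.SystemRandom().shuffle(vault)
    return vault

def poly_mul_linear(poly, x0):
    # Multiply poly (low-degree first) by (x - x0) mod P.
    out = [0] * (len(poly) + 1)
    for i, c in enumerate(poly):
        out[i] = (out[i] - x0 * c) % P
        out[i + 1] = (out[i + 1] + c) % P
    return out

def interpolate(points):
    # Lagrange interpolation over GF(P); returns low-degree-first coefficients.
    k = len(points)
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(points):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                basis = poly_mul_linear(basis, xj)
                denom = denom * (xi - xj) % P
        scale = yi * pow(denom, -1, P) % P
        for d in range(k):
            coeffs[d] = (coeffs[d] + scale * basis[d]) % P
    return coeffs

def unlock(vault, B, k):
    # Brute force over k-subsets, in place of Reed-Solomon decoding:
    # accept a polynomial consistent with at least k + 1 candidate points.
    cand = [pt for pt in vault if pt[0] in B]
    for subset in combinations(cand, k):
        poly = interpolate(list(subset))
        if sum(poly_eval(poly, x) == y for x, y in cand) >= k + 1:
            return poly
    return None
```

The paper replaces this combinatorial unlocking by list recovery of Folded Reed-Solomon codes, which is what makes the multi-set extension efficient.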
Julien Bringer, Hervé Chabanne, Mélanie Favre

Bounds and Constructions for 1-Round (0,δ)-Secure Message Transmission against Generalized Adversary

In the Secure Message Transmission (SMT) problem, a sender \(\cal S\) is connected to a receiver \(\cal R\) through n node-disjoint paths in the network, a subset of which are controlled by an adversary with unlimited computational power. \(\cal{S}\) wants to send a message m to \(\cal{R}\) in a private and reliable way. Constructing secure and efficient SMT protocols against a threshold adversary, who can corrupt at most t out of n wires, has been extensively researched. However, less is known about the SMT problem for a generalized adversary, who can corrupt any one subset out of a given collection of possible subsets.
In this paper we focus on 1-round (0,δ)-SMT protocols where privacy is perfect and the chance of protocol failure (receiver outputting NULL) is bounded by δ. These protocols are especially attractive because of their possible practical applications.
We first show an equivalence between secret sharing with cheating and canonical 1-round (0, δ)-SMT against a generalized adversary. This generalizes a similar result known for threshold adversaries. We use this equivalence to obtain a lower bound on the communication complexity of canonical 1-round (0, δ)-SMT against a generalized adversary. We also derive a lower bound on the communication complexity of a general 1-round (0, 0)-SMT against a generalized adversary.
We finally give a construction using a linear secret sharing scheme and a special type of hash function. The protocol has almost optimal communication complexity and achieves this efficiency for a single message (it does not require a block of messages to be sent).
Reihaneh Safavi-Naini, Mohammed Ashraful Alam Tuhin

Improving the Performance of the SYND Stream Cipher

In 2007, Gaborit et al. proposed the stream cipher SYND as an improvement of the pseudo-random number generator due to Fischer and Stern. This work shows how to considerably improve the efficiency of the SYND cipher without using the so-called regular encoding and without compromising the security of the modified SYND stream cipher. Our proposal, called XSYND, uses a generic state transformation which is reducible to the Regular Syndrome Decoding (RSD) problem, but has better computational characteristics than the regular encoding. A first implementation shows that XSYND runs much faster than SYND for a comparable security level (more than three times faster for a security level of 128 bits, and more than six times faster for 400-bit security), though it is still only half as fast as AES in counter mode. Parallel computation may yet improve the speed of our proposal, and we leave improving the efficiency of our implementation as future research.
Mohammed Meziani, Gerhard Hoffmann, Pierre-Louis Cayrel

Block Ciphers

Impossible Differential Cryptanalysis of the Lightweight Block Ciphers TEA, XTEA and HIGHT

TEA, XTEA and HIGHT are lightweight block ciphers with 64-bit block sizes and 128-bit keys. The round functions of the three ciphers are based on the simple operations XOR, modular addition and shift/rotation. TEA and XTEA are Feistel ciphers with 64 rounds designed by Needham and Wheeler; XTEA was proposed by the same authors as an enhanced version of TEA. HIGHT, designed by Hong et al., is a generalized Feistel cipher with 32 rounds. These block ciphers are simple and easy to implement, but their diffusion is slow, which allows us to find some impossible properties.
This paper proposes a method to identify the impossible differentials for TEA and XTEA by using the weak diffusion, where the impossible differential comes from a bit contradiction. Our method finds a 14-round impossible differential of XTEA and a 13-round impossible differential of TEA, which result in impossible differential attacks on 23-round XTEA and 17-round TEA, respectively. These attacks significantly improve the previous impossible differential attacks on 14-round XTEA and 11-round TEA given by Moon et al. from FSE 2002. For HIGHT, we improve the 26-round impossible differential attack proposed by Özen et al.; an impossible differential attack on 27-round HIGHT that is slightly faster than the exhaustive search is also given.
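Since the weak diffusion of XTEA's ARX round function is exactly what the attack exploits, the standard public-domain XTEA algorithm is worth recalling; this Python transcription follows the well-known reference code.

```python
MASK = 0xFFFFFFFF
DELTA = 0x9E3779B9  # round constant derived from the golden ratio

def xtea_encrypt(block, key, rounds=64):
    # block: pair of 32-bit words; key: four 32-bit words.
    v0, v1 = block
    s = 0
    for _ in range(rounds // 2):  # one loop iteration = two Feistel rounds
        v0 = (v0 + ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & MASK
        s = (s + DELTA) & MASK
        v1 = (v1 + ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & MASK
    return v0, v1

def xtea_decrypt(block, key, rounds=64):
    v0, v1 = block
    s = (DELTA * (rounds // 2)) & MASK
    for _ in range(rounds // 2):
        v1 = (v1 - ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & MASK
        s = (s - DELTA) & MASK
        v0 = (v0 - ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & MASK
    return v0, v1
```

Note that each round only mixes a few bit positions (shifts by 4 and 5 plus carries), which is why a single-bit difference needs many rounds to spread, enabling the impossible differentials above.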
Jiazhe Chen, Meiqin Wang, Bart Preneel

Three-Subset Meet-in-the-Middle Attack on Reduced XTEA

This paper presents an improved single-key attack on the block cipher XTEA using the three-subset meet-in-the-middle (MitM) attack. Firstly, a technique for generic block ciphers is discussed. It points out that the previous work applying the splice-and-cut technique to the three-subset MitM attack contains incomplete arguments and thus requires a very large data complexity, close to the full codebook. This paper gives a corrected procedure that keeps the data complexity small. Secondly, the three-subset MitM attack is applied to reduced-round XTEA, a 64-bit block cipher with a 64-round Feistel network and a 128-bit key. 25 rounds are attacked with 9 known plaintexts and \(2^{120.40}\) XTEA computations, while the previous best single-key attack only reaches 23 rounds. In the chosen-plaintext model, the attack is extended to 28 rounds with \(2^{37}\) chosen plaintexts and \(2^{120.38}\) computations.
Yu Sasaki, Lei Wang, Yasuhide Sakai, Kazuo Sakiyama, Kazuo Ohta

Differential Cryptanalysis of Reduced-Round ICEBERG

ICEBERG was proposed by Standaert et al. at FSE 2004 for reconfigurable hardware implementations. It uses a 64-bit block size and a 128-bit key, and has 16 rounds. Notably, it is an SPN block cipher whose components are all involutional, allowing very efficient combinations of encryption/decryption. In this paper, we propose an elaborate method to identify 6-round differentials and present a differential attack on 7-round ICEBERG with \(2^{57}\) chosen plaintexts and \(2^{90.28}\) 7-round encryptions. We then use multiple differentials to attack 8-round ICEBERG with \(2^{63}\) chosen plaintexts and \(2^{96}\) 8-round encryptions. The previous linear cryptanalysis can only attack 7-round ICEBERG with the whole codebook, which indicates that ICEBERG is more resistant to linear cryptanalysis than to differential cryptanalysis. Although our attack does not threaten ICEBERG, it is the best attack on ICEBERG published to date, and our method for identifying multiple differentials can be applied to other similar block ciphers.
Yue Sun, Meiqin Wang, Shujia Jiang, Qiumei Sun

Compact Implementation and Performance Evaluation of Block Ciphers in ATtiny Devices

The design of lightweight block ciphers has been a very active research topic over the last years. However, the lack of comparable source code generally makes it hard to evaluate the extent to which implementations of different ciphers actually reach their low-cost goals on various platforms. This paper reports on an initiative aiming to address this issue. First, we provide implementations of 12 block ciphers on an ATMEL AVR ATtiny45 8-bit microcontroller, and make the corresponding source code available on a web page under an open-source license. Common interfaces and design goals are followed by all designers to achieve comparable implementation results. Second, we evaluate the performance of our implementations with respect to different metrics, including energy-consumption measurements, and show our improvements compared to existing implementations.
Thomas Eisenbarth, Zheng Gong, Tim Güneysu, Stefan Heyse, Sebastiaan Indesteege, Stéphanie Kerckhof, François Koeune, Tomislav Nad, Thomas Plos, Francesco Regazzoni, François-Xavier Standaert, Loic van Oldeneel tot Oldenzeel

Network Security Protocols

Cryptanalysis of Enhanced TTS, STS and All Its Variants, or: Why Cross-Terms Are Important

We show that the two multivariate signature schemes Enhanced STS, proposed at PQCrypto 2010, and Enhanced TTS, proposed at ACISP 2005, are vulnerable due to systematically missing cross-terms. To this aim, we generalize equivalent keys to so-called good keys for an improved algebraic key recovery attack. In particular, we demonstrate that it is impossible to choose both secure and efficient parameters for Enhanced STS and break all current parameters of both schemes.
Since 2010, many variants of Enhanced STS, such as Check Equations or Hidden Pair of Bijections, have been proposed. We break all these variants and show that making STS secure will lead either to a variant known as the Oil, Vinegar and Salt signature scheme or, if we also require the signing algorithm to be efficient, to the well-known Rainbow signature scheme. We show that our attack is more efficient than any previously known attack.
Enrico Thomae, Christopher Wolf

A Complementary Analysis of the (s)YZ and DIKE Protocols

The Canetti–Krawczyk (CK) model remains widely used for the analysis of key agreement protocols. We recall the CK model and the variant used for the analysis of the HMQV protocol, the CK\(_\text{HMQV}\) model, together with some of their limitations. Next, we show that the (s)YZ protocols do not achieve their claimed CK\(_\text{HMQV}\) security. Furthermore, we show that they do not achieve their claimed computational fairness. Our attack suggests that no two-pass key establishment protocol can achieve this attribute. We also show that the Deniable Internet Key Exchange fails in authentication, which illustrates the inability of the CK model to capture some impersonation attacks. Besides, we propose a secure, efficient, and deniable protocol, geared to the post-specified peer model.
Augustin P. Sarr, Philippe Elbaz–Vincent

Public-Key Cryptography

A New Attack on RSA and CRT-RSA

In RSA, the public modulus N = pq is the product of two primes of the same bit-size, and the public exponent e and the private exponent d satisfy \(ed\equiv 1 \pmod{(p - 1)(q - 1)}\). In many applications of RSA, d is chosen to be small. This was cryptanalyzed by Wiener in 1990, who showed that RSA is insecure if \(d < N^{0.25}\). As an alternative, Quisquater and Couvreur proposed the CRT-RSA scheme for the decryption phase, where \(d_p = d \pmod{(p - 1)}\) and \(d_q = d \pmod{(q - 1)}\) are chosen significantly smaller than p and q. In 2006, Bleichenbacher and May presented an attack on CRT-RSA when the CRT exponents \(d_p\) and \(d_q\) are both suitably small. In this paper, we show that RSA is insecure if the public exponent e satisfies an equation \(ex+y\equiv 0\pmod p\) with \(|x||y|<N^{\frac{\sqrt{2}-1}{2}}\) and \(ex+y\not\equiv 0\pmod N\). As an application of our new attack, we present the cryptanalysis of CRT-RSA if one of the private exponents, \(d_p\) say, satisfies \(d_p<\frac{N^{\frac{\sqrt{2}}{4}}}{\sqrt{e}}\). This improves the result of Bleichenbacher and May on CRT-RSA, where both \(d_p\) and \(d_q\) are required to be suitably small.
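The Quisquater-Couvreur CRT decryption trick that makes small d_p, d_q attractive can be sketched in a few lines; the toy primes and variable names are ours, and real parameters are vastly larger.

```python
# Toy parameters; real RSA uses primes of 512+ bits each.
p, q = 1000000007, 1000000009
N = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

dp, dq = d % (p - 1), d % (q - 1)  # the CRT exponents
q_inv = pow(q, -1, p)              # precomputed for recombination

def crt_decrypt(c):
    mp = pow(c, dp, p)             # two half-size exponentiations ...
    mq = pow(c, dq, q)
    h = q_inv * (mp - mq) % p      # ... recombined with Garner's formula
    return mq + h * q

m = 123456789
c = pow(m, e, N)
```

Because the exponentiations run modulo p and q rather than N, CRT decryption is roughly four times faster, which is the incentive for choosing small d_p, d_q that this paper's attack exploits.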
Abderrahmane Nitaj

Shift-Type Homomorphic Encryption and Its Application to Fully Homomorphic Encryption

This work addresses the characterization of homomorphic encryption schemes both in terms of security and design. In particular, we are interested in currently existing fully homomorphic encryption (FHE) schemes and their common structures and security. Our main contributions can be summarized as follows:
  • We define a certain type of homomorphic encryption that we call shift-type and identify it as the basic underlying structure of all existing homomorphic encryption schemes. It generalizes the already known notion of shift-type group homomorphic encryption.
  • We give an IND-CPA characterization of all shift-type homomorphic encryption schemes in terms of an abstract subset membership problem.
  • We show that this characterization carries over to all leveled FHE schemes that arise by applying Gentry’s bootstrapping technique to shift-type homomorphic encryption schemes. Since this is the common structure of all existing schemes, our result actually characterizes the IND-CPA security of all existing bootstrapping-based leveled FHE.
  • We prove that the IND-CPA security of FHE schemes that offer a certain type of circuit privacy (for FHE schemes with a binary plaintext space we require circuit privacy for a single AND-gate and, in fact, all existing binary-plaintext FHE schemes offer this) and are based on Gentry’s bootstrapping technique is equivalent to the circular security of the underlying bootstrappable scheme.
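The "shift-type" structure can be illustrated with a classic symmetric somewhat-homomorphic scheme over the integers (in the style of van Dijk et al.); it is not taken from this paper, but has the shape the authors generalize: a ciphertext is the plaintext shifted by even noise and a random multiple of a secret modulus. All parameters below are toy-sized.

```python
import secrets

# Symmetric DGHV-style scheme over the integers:
# c = SK*q + 2*r + m with secret odd SK; Dec(c) = (c mod SK) mod 2.
SK = 1000003                       # secret odd modulus (toy size)

def enc(m):
    q = secrets.randbelow(2**32) + 1
    r = secrets.randbelow(2**8)    # noise; must stay small relative to SK
    return SK * q + 2 * r + m      # the plaintext bit sits in the "low" part

def dec(c):
    return (c % SK) % 2

c0, c1 = enc(0), enc(1)
```

Adding ciphertexts XORs plaintexts and multiplying them ANDs plaintexts, as long as the accumulated noise stays below SK (here the noise is at most 511 per ciphertext, so one multiplication keeps it under 511² < SK); bootstrapping is what lifts such a "somewhat" homomorphic scheme to a leveled FHE scheme.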
Frederik Armknecht, Stefan Katzenbeisser, Andreas Peter

Cryptanalysis of Hash Functions

The Collision Security of MDC-4

There are four somewhat classical double-length block-cipher-based compression functions known: MDC-2, MDC-4, Abreast-DM, and Tandem-DM. All of them were developed over 20 years ago. In recent years, cryptographic research has put a focus on block-cipher-based hashing and found collision security results for three of them (MDC-2, Abreast-DM, Tandem-DM). In this paper, we add MDC-4, which is part of the IBM CLiC cryptographic module, to that list by showing that, when instantiated with an ideal block cipher with 128-bit key/plaintext/ciphertext size, no adversary asking fewer than \(2^{74.76}\) queries can find a collision with probability greater than 1/2. This is the first result on the collision security of the hash function MDC-4.
The compression function MDC-4 is created by interconnecting two MDC-2 compression functions, but hashing only one message block with them instead of two. The developers' aim for MDC-4 was to offer a higher security margin than MDC-2 while still being fast enough for practical purposes.
The MDC-2 collision security proof of Steinberger (EUROCRYPT 2007) cannot be directly applied to MDC-4 due to the structural differences. Although the two proofs share many commonalities, our proof for MDC-4 is much shorter, and we claim that our presentation is also easier to grasp.
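For orientation, a minimal sketch of one MDC-2 compression step (the building block that MDC-4 interconnects) may help. The "ideal cipher" is modeled here by a truncated keyed hash, which is an assumption of this sketch (it is not invertible, but only the forward direction is needed); block width and helper names are ours.

```python
import hashlib

BLOCK = 8  # toy 64-bit block width, in bytes

def E(key, msg):
    # Stand-in for an ideal 64-bit block cipher: a keyed hash truncated
    # to one block. Not invertible, but MDC-2 only encrypts, never decrypts.
    return hashlib.sha256(key + msg).digest()[:BLOCK]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def mdc2_compress(h1, h2, m):
    # Two parallel Davies-Meyer-style lanes keyed by the chaining values ...
    a = xor(E(h1, m), m)
    b = xor(E(h2, m), m)
    # ... whose right halves are swapped between the lanes.
    half = BLOCK // 2
    return a[:half] + b[half:], b[:half] + a[half:]
```

Each call consumes one message block and updates both chaining halves; MDC-4 spends two such layers per message block, which is the source of its higher security margin and lower speed.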
Ewan Fleischmann, Christian Forler, Stefan Lucks

SPN-Hash: Improving the Provable Resistance against Differential Collision Attacks

Collision resistance is a fundamental property required for cryptographic hash functions. One way to ensure collision resistance is to use hash functions based on public-key cryptography (PKC), which reduces collision resistance to a hard mathematical problem, but such primitives are usually slow. A more practical approach is to use symmetric-key design techniques, which lead to faster schemes, but collision resistance can then only be heuristically inferred from the best probability of a single differential characteristic path. We propose a new hash function design with variable hash output sizes of 128, 256, and 512 bits, which reduces this gap. Due to its inherent Substitution-Permutation Network (SPN) structure and JH mode of operation, we are able to compute its differential collision probability using the concept of differentials. Namely, for each possible input difference, we take into account all the differential paths leading to a collision, and this enables us to prove that our hash function is secure against a differential collision attack using a single input difference. None of the SHA-3 finalists could prove such a resistance. At the same time, our hash function design is secure against pre-image, second pre-image and rebound attacks, and is faster than PKC-based hashes. Part of our design includes a generalization of the optimal diffusion used in the classical wide-trail SPN construction of Daemen and Rijmen, which leads to near-optimal differential bounds when applied to non-square byte arrays. We also found a novel way to use parallel copies of a serial matrix over the finite field GF(24), so as to create lightweight and secure byte-based diffusion for our design. Overall, we obtain hash functions that are fast in software, very lightweight in hardware (about 4625 GE for the 256-bit hash output) and that provide much stronger security proofs regarding collision resistance than any of the SHA-3 finalists.
Jiali Choy, Huihui Yap, Khoongming Khoo, Jian Guo, Thomas Peyrin, Axel Poschmann, Chik How Tan

Security Analysis and Comparison of the SHA-3 Finalists BLAKE, Grøstl, JH, Keccak, and Skein

In 2007, the US National Institute of Standards and Technology (NIST) announced a call for the design of a new cryptographic hash algorithm in response to the vulnerabilities identified in widely employed hash functions, such as MD5 and SHA-1. NIST received many submissions, 51 of which were accepted to the first round. At present, 5 candidates are left in the third round of the competition. At NIST's second SHA-3 Candidate Conference 2010, Andreeva et al. provided a provable security classification of the second-round SHA-3 candidates in the ideal model. In this work, we revisit this classification for the five SHA-3 finalists. We evaluate recent provable security results on the candidates, and resolve remaining open problems for Grøstl, JH, and Skein.
Elena Andreeva, Bart Mennink, Bart Preneel, Marjan Škrobot

Hash Functions: Design and Implementation

The GLUON Family: A Lightweight Hash Function Family Based on FCSRs

Since the beginning of the SHA-3 competition, the cryptographic community has seen the emergence of a new kind of primitive: lightweight cryptographic hash functions. At the time of writing this article, two representatives of this category have been published: Quark [7] and PHOTON [18], both designed to match RFID constraints.
In this paper, we propose a third representative of this category, called GLUON. It is based on the sponge construction model [11], like Quark and PHOTON, and is inspired by the two stream ciphers F-FCSR-v3 [4] and X-FCSR-v2 [10]. From the generic definition of our lightweight hash function, we derive three different instances according to the required security level.
For example, our lightest instance (GLUON-128/8), dedicated to the 64-bit security level, fits in 2071 gate equivalents, which stays competitive when compared with the parallel implementation of U-Quark. The software performance of GLUON-224/32, our heaviest instance, is good.
Thierry P. Berger, Joffrey D’Hayer, Kevin Marquet, Marine Minier, Gaël Thomas

SHA-3 on ARM11 Processors

This paper presents high-speed assembly implementations of the 256-bit-output versions of all five SHA-3 finalists and of SHA-256 for the ARM11 family of processors. We report new speed records for all six implemented functions. For example, our implementation of the round-3 version of JH-256 is 35% faster than the fastest implementation of the round-2 version of JH-256 in eBASH. Scaled with the number of rounds, this is more than a 45% improvement. We also improve upon previous assembly implementations for 32-bit ARM processors. For example, the implementation of Grøstl-256 described in this paper is about 20% faster than the arm32 implementation in eBASH.
Peter Schwabe, Bo-Yin Yang, Shang-Yi Yang

Algorithms for Public-Key Cryptography

Improved Fixed-Base Comb Method for Fast Scalar Multiplication

Computing elliptic-curve scalar multiplication is the most time-consuming operation in any elliptic-curve cryptosystem. Over the last decades, it has been shown that pre-computation of elliptic-curve points improves the performance of scalar multiplication, especially in cases where the elliptic-curve point P is fixed. In this paper, we present an improved fixed-base comb method for scalar multiplication. In contrast to existing comb methods such as those proposed by Lim and Lee or Tsaur and Chou, we make use of a width-ω non-adjacent form representation and restrict the number of rows of the comb to be greater than or equal to ω. The proposed method shows a significant reduction in the number of required elliptic-curve point addition operations. The computational complexity is reduced by 33% to 38% compared to the Tsaur-Chou method, even for devices that have limited resources. Furthermore, we propose a constant-time variation of the method to thwart simple-power-analysis attacks.
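The width-ω NAF representation mentioned above can be sketched as follows; every nonzero digit is odd with absolute value below 2^(ω-1), which is what lets comb and window methods precompute only the odd multiples of P. Function names are ours.

```python
def wnaf(k, w):
    # Width-w non-adjacent form of k, least significant digit first.
    digits = []
    while k > 0:
        if k & 1:
            d = k % (1 << w)
            if d >= (1 << (w - 1)):
                d -= 1 << w          # take the signed residue closest to zero
            k -= d                   # k - d is now divisible by 2^w
        else:
            d = 0
        digits.append(d)
        k >>= 1
    return digits

def scalar_from_wnaf(digits):
    # Reconstruct k = sum(d_i * 2^i) to check the recoding.
    return sum(d << i for i, d in enumerate(digits))
```

Because k - d is divisible by 2^w after each nonzero digit, any w consecutive digits contain at most one nonzero entry, which reduces the number of point additions during scalar multiplication.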
Nashwa A. F. Mohamed, Mohsin H. A. Hashim, Michael Hutter

Optimal First-Order Masking with Linear and Non-linear Bijections

Hardware devices can be protected against side-channel attacks by introducing one random mask per sensitive variable. The computation throughout is unaltered if the shares (masked variable and mask) are processed concomitantly, in two distinct registers. Nonetheless, this setup can be attacked by a zero-offset second-order CPA attack. The countermeasure can be improved by manipulating the mask through a bijection F, aimed at reducing the dependency between the shares. Thus dth-order zero-offset attacks, which consist in applying CPA to the dth power of the centered side-channel traces, can be thwarted for d ≥ 2 at no extra cost. We denote by n the size in bits of the shares and call F the transformation function, a bijection of \(\mathbb{F}_2^n\). In this paper, we explore the functions F that thwart zero-offset HO-CPA of maximal order d. We mathematically demonstrate that optimal choices for F relate to optimal binary codes (in the sense of communication theory). First, we exhibit optimal linear F functions. Second, we note that for values of n for which non-linear codes with better parameters than linear ones exist, the optimal F can still be identified. These results are exemplified in the case n = 8: the optimal F is derived from the optimal rate-1/2 binary code of size \(2^n\), namely the Nordstrom-Robinson (16, 256, 6) code. This example explicitly provides the optimal protection with a single mask for byte-oriented algorithms such as AES or AES-based SHA-3 candidates, and protects against all zero-offset HO-CPA attacks of order d ≤ 5. Finally, the countermeasure is shown to be resilient to imperfect leakage models.
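The baseline countermeasure described above (one random mask per sensitive variable) can be sketched as a first-order masked table lookup. The bijection F applied to the mask share is the paper's contribution and is not implemented here; the S-box choice and helper names are ours.

```python
import secrets

# PRESENT's 4-bit S-box, standing in for any S-box lookup.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def masked_table(m_in, m_out):
    # Recompute the table so that T[x ^ m_in] = SBOX[x] ^ m_out:
    # the device only ever touches masked values, never x or SBOX[x].
    return [SBOX[t ^ m_in] ^ m_out for t in range(16)]

x = 0x7                                  # the sensitive value
m_in, m_out = secrets.randbelow(16), secrets.randbelow(16)
share = x ^ m_in                         # what the first register holds
y_masked = masked_table(m_in, m_out)[share]
```

Processing (x ^ m_in, m_in) in two registers is exactly the setup vulnerable to zero-offset second-order CPA; the paper's remedy is to store F(m_in) instead of m_in for a well-chosen bijection F.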
Houssem Maghrebi, Claude Carlet, Sylvain Guilley, Jean-Luc Danger

Cryptographic Protocols

Size-Hiding in Private Set Intersection: Existential Results and Constructions

In this paper we focus our attention on private set intersection. We show impossibility and existential results, and we provide some explicit constructions. More precisely, we start by looking at the case in which both parties, client and server, would like to hide the sizes of their sets of secrets while securely computing the intersection, and we show that:
  • It is impossible to realize an unconditionally secure size-hiding set intersection protocol.
  • In a model where a TTP provides set up information to the two parties and disappears, unconditionally secure size-hiding set intersection is possible.
  • There exist computationally secure size-hiding set intersection protocols.
Then, we provide some explicit constructions for one-sided protocols, where only the client gets the intersection and hides the size of her set of secrets. In the model with the TTP, we design two protocols which are computationally secure under standard assumptions, and two very efficient protocols which are secure in the random oracle model. We close the paper with some remarks and by pointing out several interesting open problems.
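To illustrate the plain (not size-hiding) set-intersection functionality the paper starts from, here is a sketch of the folklore Diffie-Hellman-based PSI protocol for semi-honest parties; it is not one of the paper's constructions, and the group choice and helper names are ours.

```python
import hashlib
import math
import secrets

P = 2**127 - 1   # a Mersenne prime; exponentiation acts as a commutative cipher

def H(item):
    return int.from_bytes(hashlib.sha256(item).digest(), "big") % P

def keygen():
    while True:
        a = secrets.randbelow(P - 2) + 2
        if math.gcd(a, P - 1) == 1:    # ensure x -> x^a is a bijection mod P
            return a

def psi(client_set, server_set):
    a, b = keygen(), keygen()
    # Round 1: the client blinds its items as H(x)^a.
    c1 = [pow(H(x), a, P) for x in client_set]
    # Round 2: the server raises them to b and sends H(y)^b for its own items.
    c2 = [pow(v, b, P) for v in c1]
    s1 = {pow(H(y), b, P) for y in server_set}
    # The client finishes: H(y)^(ba) equals H(x)^(ab) exactly on the intersection.
    s2 = {pow(v, a, P) for v in s1}
    return {x for x, v in zip(client_set, c2) if v in s2}
```

Note that both parties learn each other's set sizes from the message lengths, which is precisely the leakage the paper's size-hiding protocols are designed to remove.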
Paolo D’Arco, María Isabel González Vasco, Angel L. Pérez del Pozo, Claudio Soriente

Round-Optimal Black-Box Statistically Binding Selective-Opening Secure Commitments

Assuming t-round statistically hiding commitments in the stand-alone model, we build a (t + 2)-round statistically binding commitment scheme secure against selective-opening attacks under parallel composition. In particular, assuming collision-resistant hash functions, we build such commitments in 4 rounds.
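As a reminder of the commit/open interface only, here is the folklore hash-based commitment; note it is (heuristically) computationally binding and hiding, unlike the statistically binding construction of the paper, and the helper names are ours.

```python
import hashlib
import secrets

def commit(msg):
    # Publish c; keep (r, msg) secret until the opening phase.
    r = secrets.token_bytes(32)
    return hashlib.sha256(r + msg).digest(), r

def open_commit(c, r, msg):
    # Anyone can check that the opening matches the published commitment.
    return hashlib.sha256(r + msg).digest() == c
```

Selective-opening security concerns exactly this opening step: an adversary who sees many commitments and may demand that an adaptively chosen subset of them be opened should learn nothing about the unopened values.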
David Xiao

Invited Talks

Stream Ciphers, a Perspective

Synchronous stream ciphers are commonly used in applications with high throughput requirements or on hardware devices with restricted resources. Well-known stream ciphers are A5/1, used in GSM; RC4, used in SSL; and E0, as specified in Bluetooth; but also some block cipher modes of operation. A review of the development of stream ciphers is given, starting with classical designs and leading to modern dedicated stream ciphers such as those of the European NoE eSTREAM project. The history of stream ciphers is rich in new proposals followed by devastating breaks, e.g., by statistical or algebraic attacks. Differential cryptanalysis is probably the most popular tool for chosen-plaintext attacks on block ciphers. It also applies to the initialization step of stream ciphers, but here, higher-order differential attacks are shown to be surprisingly successful, namely on constructions based on linear and nonlinear feedback shift registers. The process of designing and cryptanalyzing stream ciphers has not only resulted in a number of building blocks for stream ciphers: similar components turn out to be useful as well in the design of lightweight block ciphers, hash functions, and algorithms for authenticated encryption.
Willi Meier

Black-Box Reductions and Separations in Cryptography

Cryptographic constructions of one primitive or protocol from another one usually come with a reductionist security proof, in the sense that the reduction turns any adversary breaking the derived scheme into a successful adversary against the underlying scheme. Very often the reduction is black-box in the sense that it only looks at the input/output behavior of the adversary and of the underlying primitive. Here we survey the power and the limitations of such black-box reductions, and take a closer look at the recent method of meta-reductions.
Marc Fischlin

