
About this Book

This book constitutes the refereed proceedings of the 16th Australasian Conference on Information Security and Privacy, ACISP 2011, held in Melbourne, Australia, in July 2011. The 24 revised full papers presented together with an invited talk and 9 poster papers were carefully reviewed and selected from 103 submissions. The papers are organized in topical sections on symmetric key cryptography, hash functions, cryptographic protocols, access control and security, and public key cryptography.

Table of Contents

Frontmatter

Invited Talks

On Known and New Differentially Uniform Functions

We give a survey of the constructions of APN and differentially 4-uniform functions suitable for designing S-boxes for block ciphers. We recall why the search for more such functions is necessary. We propose a way of designing functions which can possibly be APN or differentially 4-uniform and be bijective. We illustrate it with an example of a differentially 4-uniform (n, n)-permutation for n odd, based on the power function x^3 over the second order Galois extension of ${\Bbb F}_{2^{n+1}}$, and related to the Dickson polynomial D_3 over this field. These permutations have optimal algebraic degree and their nonlinearity happens to be rather good (but worse than that of the multiplicative inverse functions).

Claude Carlet
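
As a reminder of the terminology used above (standard definitions, not specific to this survey): a function $F:{\Bbb F}_2^n \rightarrow {\Bbb F}_2^n$ is differentially δ-uniform, where

    $\delta_F = \max_{a \neq 0,\ b}\ \#\{\, x \in {\Bbb F}_2^n : F(x \oplus a) \oplus F(x) = b \,\}$.

F is APN (almost perfect nonlinear) when $\delta_F = 2$ and differentially 4-uniform when $\delta_F = 4$; the smaller the uniformity, the better the resistance of an S-box built from F to differential cryptanalysis.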

Symmetric Key Cryptography

New Impossible Differential Attacks of Reduced-Round Camellia-192 and Camellia-256

Camellia, a block cipher selected as a standard by ISO/IEC, is one of the most widely used block ciphers. In this paper, we propose several 6-round impossible differentials of Camellia with FL/FL^{-1} layers in the middle of them. With these impossible differentials and a well-organized precomputed table, impossible differential attacks on 10-round Camellia-192 and 11-round Camellia-256 are given, with time complexities of 2^{175.3} and 2^{206.8} respectively. In addition, an impossible differential attack on 15-round Camellia-256 without FL/FL^{-1} layers and whitening is also given, which needs about 2^{236.1} encryptions. To the best of our knowledge, these are the best cryptanalytic results to date on Camellia-192/-256 with FL/FL^{-1} layers and on Camellia-256 without FL/FL^{-1} layers.

Jiazhe Chen, Keting Jia, Hongbo Yu, Xiaoyun Wang
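
As general background on the attack model (a standard definition, not a detail of this paper): an impossible differential over some middle rounds is a pair of differences (α, β) that no key can connect,

    $\Pr\big[\, E^{(r)}_K(x) \oplus E^{(r)}_K(x \oplus \alpha) = \beta \,\big] = 0$ for every key $K$,

so any outer-round key guess under which a collected plaintext/ciphertext pair would realize (α, β) through those middle rounds can be discarded; only wrong keys are eliminated by this filtering.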

Results on the Immunity of Boolean Functions against Probabilistic Algebraic Attacks

In this paper, we study the immunity of Boolean functions against probabilistic algebraic attacks. We first show that there are functions, used as filters in a linear feedback shift register based nonlinear filter generator, for which probabilistic algebraic attacks outperform deterministic ones. Then we introduce two notions, algebraic immunity distance and k-error algebraic immunity, to measure the ability of Boolean functions to resist probabilistic algebraic attacks. We analyze both lower and upper bounds on algebraic immunity distance, and also present the relations among algebraic immunity distance, k-error algebraic immunity, algebraic immunity and higher order nonlinearity.

Meicheng Liu, Dongdai Lin, Dingyi Pei
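
For reference, the (deterministic) notion being relaxed here is the textbook algebraic immunity of an n-variable Boolean function f,

    $AI(f) = \min \{\, \deg g : g \neq 0,\ gf = 0 \ \text{or}\ g(f \oplus 1) = 0 \,\}$,

i.e. the minimum degree of a nonzero annihilator of f or of f ⊕ 1; probabilistic algebraic attacks relax the annihilation requirement to low-degree relations that hold only with high probability.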

Finding More Boolean Functions with Maximum Algebraic Immunity Based on Univariate Polynomial Representation

Algebraic immunity is an important cryptographic property of Boolean functions against algebraic attacks. Constructions of Boolean functions with maximum algebraic immunity (MAI Boolean functions) using the univariate polynomial representation of Boolean functions over finite fields have received more and more attention. In this paper, we further investigate how to obtain more MAI Boolean functions from a known MAI Boolean function under the univariate polynomial representation. We give a sufficient condition under which a Boolean function obtained by modifying a known MAI Boolean function in this representation still has maximum algebraic immunity. With this condition, more balanced MAI Boolean functions under the univariate polynomial representation can be obtained. The algebraic degree and the nonlinearity of these Boolean functions are analyzed.

Yusong Du, Fangguo Zhang

Improving the Algorithm 2 in Multidimensional Linear Cryptanalysis

In FSE’09 Hermelin et al. introduced Algorithm 2 of multidimensional linear cryptanalysis. If this algorithm is m-dimensional and reveals l bits of the last round key with N plaintext-ciphertext pairs, then its time complexity is $\mathcal{O}(mN2^l)$. In this paper, we show that by applying the Fast Fourier Transform and Fast Walsh-Hadamard Transform to Algorithm 2 of multidimensional linear cryptanalysis, we can reduce the time complexity of the attack to $\mathcal{O}(N + \lambda2^{m+l})$, where λ is 3(m + l) or 4m + 3l. The resulting attacks are the best known key recovery attacks on 11-round and 12-round Serpent. The data, time, and memory complexities of the previously best known attack on 12-round Serpent are reduced by factors of 2^{7.5}, 2^{11.7}, and 2^{7.5}, respectively. This paper also presents experimental simulations of the improved Algorithm 2 of multidimensional linear cryptanalysis on 5-round Serpent.

Phuong Ha Nguyen, Hongjun Wu, Huaxiong Wang
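
The speed-up rests on evaluating the counters for all key candidates at once with a fast transform instead of looping over them one key guess at a time. A minimal, generic fast Walsh-Hadamard transform (a textbook routine, not code from the paper) looks like this in Python:

    def fwht(a):
        """In-place fast Walsh-Hadamard transform of a list whose length is a power of two."""
        n = len(a)
        h = 1
        while h < n:
            for i in range(0, n, h * 2):
                for j in range(i, i + h):
                    x, y = a[j], a[j + h]
                    a[j], a[j + h] = x + y, x - y
            h *= 2
        return a

    # Toy usage: transform a table of 2^3 counters with O(n log n) additions instead of O(n^2).
    counters = [3, -1, 0, 2, 5, 1, -2, 4]
    print(fwht(counters))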

State Convergence in the Initialisation of Stream Ciphers

An initialisation process is a key component in modern stream cipher design. A well-designed initialisation process should ensure that each key-IV pair generates a different keystream. In this paper, we analyse two ciphers, A5/1 and Mixer, for which this does not happen due to state convergence. We show how the state convergence problem occurs and estimate the effective key-space in each case.

Sui-Guan Teo, Ali Al-Hamdan, Harry Bartlett, Leonie Simpson, Kenneth Koon-Ho Wong, Ed Dawson

On Maximum Differential Probability of Generalized Feistel

The maximum differential probability (MDP) is an important security measure for blockciphers. We investigate the MDP of the Type-2 generalized Feistel structure (Type-2 GFS), one of the most popular cipher architectures. Previously, the MDP of Type-2 GFS has been studied for partition number (number of sub-blocks) k = 2 by Aoki and Ohta, and k = 4 by Kim et al. These studies are based on ad-hoc case analysis, and it seems rather difficult to analyze larger k by hand. In this paper, we abstract the idea of the previous studies, generalize it for any k, and implement it using computers. We investigate Type-2 GFS with k = 4, 6, 8 and 10 and k + 1 rounds, and obtain an O(p^k) bound for all cases, when the round function is invertible and its MDP is p. The bound for k = 4 improves on that of Kim et al., and those for larger k are new. We also investigate an improvement of Type-2 GFS proposed by Suzaki and Minematsu, and obtain similar bounds as for Type-2.

Kazuhiko Minematsu, Tomoyasu Suzaki, Maki Shigeri
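
For orientation, a sketch of the structure being analyzed, assuming the common textbook formulation of Type-2 GFS rather than the paper's exact notation: each round applies the round function F to every even-indexed sub-block, XORs the result into the following sub-block, and then rotates the sub-blocks left by one position.

    def type2_gfs_round(blocks, round_keys, F):
        """One Type-2 GFS round on an even number k of sub-blocks.

        blocks: list of k integers, round_keys: list of k//2 round keys,
        F: keyed round function F(key, x) -> int.
        """
        k = len(blocks)
        out = list(blocks)
        for i in range(0, k, 2):
            out[i + 1] ^= F(round_keys[i // 2], out[i])   # F on even block, XORed into the next block
        return out[1:] + out[:1]                          # left-rotate the sub-blocks by one

    # Toy usage with k = 4 and a dummy 8-bit round function.
    F = lambda key, x: (x * 31 + key) & 0xFF
    state = [0x12, 0x34, 0x56, 0x78]
    for keys in [[1, 2], [3, 4], [5, 6]]:
        state = type2_gfs_round(state, keys, F)
    print([hex(b) for b in state])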

Double SP-Functions: Enhanced Generalized Feistel Networks

Extended Abstract

This work deals with the security and efficiency of type-I and type-II generalized Feistel networks (GFNs) with 4 lines. We propose to instantiate the GFNs with double SP-functions (a substitution-permutation layer followed by another substitution-permutation layer) instead of single SP-functions (one substitution-permutation layer). We provide tight lower bounds on the number of differentially and linearly active functions and S-boxes in such ciphers. Based on these bounds, we show that the instantiation with double SP-functions using MDS diffusion yields a proportion of differentially and linearly active S-boxes that is up to 33% and 50% higher than with single SP-functions for type-I and type-II GFNs, respectively. This opens up the possibility of designing more efficient block ciphers based on the GFN structure. Note that type-I and type-II GFNs are the only non-contracting GFNs with 4 lines under a reasonable definition of a GFN.

Andrey Bogdanov, Kyoji Shibutani

Algebraic Techniques in Differential Cryptanalysis Revisited

At FSE 2009, Albrecht et al. proposed a new cryptanalytic method that combines algebraic and differential cryptanalysis. They introduced three new attacks, namely Attack A, Attack B and Attack C. For Attack A, they explain that the time complexity is difficult to determine. The goal of Attacks B and C is to filter out wrong pairs and then recover the key. In this paper, we show that Attack C does not provide an advantage over differential cryptanalysis for typical block ciphers, because it cannot be used to filter out any wrong pairs that satisfy the ciphertext differences. Furthermore, we explain why Attack B provides no advantage over differential cryptanalysis for PRESENT. We verify our results for PRESENT experimentally, using both PolyBoRi and MiniSat. Our work helps to understand which equations are important in the differential-algebraic attack. Based on our findings, we present two new differential-algebraic attacks. Using the first method, our attack on 15-round PRESENT-80 requires 2^{59} chosen plaintexts and has a worst-case time complexity of 2^{73.79} equivalent encryptions. Our new attack on 14-round PRESENT-128 requires 2^{55} chosen plaintexts and has a worst-case time complexity of 2^{112.83} equivalent encryptions. Although these attacks have a higher time complexity than the differential attacks, their data complexity is lower.

Meiqin Wang, Yue Sun, Nicky Mouha, Bart Preneel

Hash Functions

Faster and Smoother – VSH Revisited

We reconsider the provably collision resistant Very Smooth Hash and propose a small change in the design aiming to improve both performance and security. While the original proofs of security based on the hardness of factoring or discrete logarithms are preserved, we can base the security on the k-sum problem studied by Wagner and more recently by Minder & Sinclair. The new approach allows shorter digests to be output and brings the speed of Fast VSH closer to the range of “classical” hash functions. The modified VSH is likely to remain secure even if factoring and discrete logarithms are easy, while this would have a devastating effect on the original versions. This observation leads us to propose a variant that operates modulo a power of two to increase the speed even more. A function that offers an equivalent of 128-bit collision resistance runs at 68.5 MB/s on a 2.4 GHz Intel Core 2 CPU, more than a third of the speed of SHA-256.

Juraj Šarinay
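
For context, the original VSH of Contini, Lenstra and Steinfeld iterates a compression step of roughly the following shape (recalled here as background; the modification proposed in this paper changes the design, so take this only as orientation):

    $x_{j+1} = x_j^2 \cdot \prod_{i=1}^{k} p_i^{\,m_{jk+i}} \pmod{N}$,

where N is an RSA-like modulus, $p_i$ is the i-th prime, and $m_1, m_2, \ldots$ are the message bits; finding a collision for the original function is provably as hard as finding nontrivial modular square roots of very smooth numbers modulo N.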

Cryptanalysis of the Compression Function of SIMD

SIMD is one of the second round candidates of the SHA-3 competition hosted by NIST. In this paper, we present the first attack on the compression function of reduced SIMD-256 and of the full SIMD-512 (the tweaked version) using the modular difference method. For SIMD-256, we give a free-start near-collision attack on the compression function reduced to 20 steps with complexity 2^{116}. For SIMD-512, we give a free-start near-collision attack on the 24-step compression function with complexity 2^{235}. Furthermore, we give a distinguishing attack on the full compression function of SIMD-512 with complexity 2^{475}. Our attacks are also applicable to the final compression function of SIMD.

Hongbo Yu, Xiaoyun Wang

Protocols

Electronic Cash with Anonymous User Suspension

Electronic cash (e-cash) is the digital counterpart of cash payment. It allows users to spend anonymously unless they “double spend” their electronic coins. However, it is not possible to prevent users from misbehaving under other, more subjective definitions of misbehavior, such as money laundering. One solution is to incorporate a trusted third party (TTP) which, upon complaint, uses its power to deanonymize the suspected user. This solution, known as fair e-cash, is not fully satisfactory, since additional measures have to be taken to stop misbehaving users from further abusing the system after they have been identified. We present an e-cash system with anonymous user suspension, EC-AUS, which features a suspension manager (SM) capable of suspending the user behind any suspicious transaction. Suspended users cannot participate in any transaction. The suspension is anonymous in the sense that no party, not even SM, can tell the identities of the suspended users or link their past transactions. If they are found innocent later, their suspension can be revoked easily.

Man Ho Au, Willy Susilo, Yi Mu

T-Robust Scalable Group Key Exchange Protocol with O(logn) Complexity

Group key exchange (GKE) allows a large group of n parties to share a common secret key over insecure channels. The goal of this paper is to present a T-robust scalable GKE with communication and computation complexity O(log n) for a group of n parties. As a result, our GKE not only resists party failures resulting from party crashes, run-down batteries, and network failures, but also satisfies scalability: the parties do not all need to have the same environment, such as computational resources, batteries, etc. Previous schemes in this area focus on Burmester-Desmedt GKE with complexity O(n) (BDI) and without scalability. As a result, the previous robust GKE proposed by Jarecki, Kim and Tsudik (JKT) needs computational complexity O(n) without scalability, although it allows any T-party fault in any position.

By focusing on the well-known Burmester-Desmedt GKE with complexity O(log n) (BDII), we propose a new robust GKE with scalability, called CH-GKE. CH-GKE reduces the communication and computation complexity and allows parties to be in different environments. We then extend CH-GKE to tolerate more faults and present a T-robust scalable efficient GKE through a novel combination of CH-GKE and JKT. Our T-robust scalable GKE can work in flexible settings trading off fault tolerance against efficiency, such as communication and computation complexity.

Tetsuya Hatano, Atsuko Miyaji, Takashi Sato

Application-Binding Protocol in the User Centric Smart Card Ownership Model

In the User Centric Smart Card Ownership Model (UCOM), control of the application choice is delegated to the smart card users. There is no centralised authority that controls the card environment, and it is difficult to place implicit trust in applications installed on a smart card. The application sharing mechanism in smart cards enables collaborative and interrelated applications to co-exist and augment each other’s functionality. The established application sharing mechanisms (e.g. in Java Card and Multos) do not fully satisfy the security requirements of the UCOM, which requires a security framework that provides runtime authentication and verification of an application. Such a framework is the focus of this paper. To support the framework, we propose a protocol that is verified using CasperFDR. In addition, we implemented the protocol and provide a performance comparison with existing protocols.

Raja Naeem Akram, Konstantinos Markantonakis, Keith Mayes

Access Control and Security

Security in Depth through Smart Space Cascades

Security in depth relies on controlled access across a layering of protective barriers. We introduce smart space cascades, a framework in which access control is applied to a hierarchy of smart spaces, as a way of achieving security in depth in the context of highly automated work environments.

Benjamin W. Long

GeoEnc: Geometric Area Based Keys and Policies in Functional Encryption Systems

Functional encryption provides a more sophisticated and flexible relation between the encryption key ek and the decryption key dk, which are derived from an attribute vector $\overrightarrow{x}$ and a policy vector $\overrightarrow{v}$, respectively. A function $f({\overrightarrow{x}},\overrightarrow{v})$ determines whether a user holding the secret key dk can decrypt a ciphertext encrypted under ek. This allows an encryptor to specify a functional formula as a decryption policy describing what users can learn from the ciphertext, without knowing the decryptors’ identities or public keys.

In this paper, we explore two geometric-area-based key generation and functional encryption schemes (GeoEnc), where secret keys are associated with a point on a planar coordinate system and encryption policies are associated with a line (the GeoEncLine scheme) or a convex polygon (the GeoEncHull scheme). If the attribute point lies on the line or inside the convex hull, the decryption key holder can decrypt the ciphertext associated with the geometric policy, i.e. the line or the convex polygon. The proposed schemes have policy hiding as well as payload hiding characteristics. To the best of our knowledge, they are the first functional encryption schemes using geometric-area-based keys and policies. We give an evaluation of key distribution in a practical coordinate system and also give a security analysis in a hybrid model. The proposed schemes have many applications as sources for key generation and policy encryption, such as computer graphics security, network topology protection, secure routing and mobile networking, secure multiparty computation, secure GPS/GIS, military area protection, etc.

Mingwu Zhang, Tsuyoshi Takagi
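
The geometric predicate that the schemes encode cryptographically can be stated in the clear as follows (a plain, non-cryptographic sketch with hypothetical helper names, just to illustrate the policy function f):

    def on_line(point, line):
        """True if point (x, y) lies on the line a*x + b*y + c = 0 (GeoEncLine-style policy)."""
        x, y = point
        a, b, c = line
        return a * x + b * y + c == 0

    def in_convex_polygon(point, vertices):
        """True if point lies inside or on a convex polygon whose vertices are listed
        counter-clockwise (GeoEncHull-style policy), using the sign of cross products."""
        x, y = point
        n = len(vertices)
        for i in range(n):
            x1, y1 = vertices[i]
            x2, y2 = vertices[(i + 1) % n]
            if (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) < 0:
                return False
        return True

    # Toy usage: the policy is the unit square; decryption would succeed for (0.5, 0.5) but not (2, 2).
    square = [(0, 0), (1, 0), (1, 1), (0, 1)]
    print(in_convex_polygon((0.5, 0.5), square), in_convex_polygon((2, 2), square))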

An Efficient Rational Secret Sharing Scheme Based on the Chinese Remainder Theorem

The design of rational cryptographic protocols is a recently created research area at the intersection of cryptography and game theory. At TCC’10, Fuchsbauer et al. introduced two equilibrium notions (a computational version of strict Nash equilibrium and stability with respect to trembles) offering a computational relaxation of traditional game-theoretic equilibria. Using trapdoor permutations, they constructed a rational t-out-of-n sharing technique satisfying these new security models. Their construction only requires standard communication networks, but the share bitsize is 2n|s| + O(k) for security against a single deviation and rises to (n − t + 1)·(2n|s| + O(k)) to achieve (t − 1)-resilience, where k is a security parameter. In this paper, we propose a new protocol for rational t-out-of-n secret sharing based on the Chinese remainder theorem. Under some computational assumptions related to the discrete logarithm problem and RSA, this construction leads to a (t − 1)-resilient computational strict Nash equilibrium that is stable with respect to trembles, with share bitsize O(k). Our protocol does not rely on simultaneous channels. Instead, it only requires a synchronous broadcast channel and synchronous pairwise private channels.

Yun Zhang, Christophe Tartary, Huaxiong Wang
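
As background on the underlying (non-rational) tool, a t-out-of-n threshold scheme can be built from the Chinese remainder theorem in the style of Asmuth and Bloom. A minimal sketch of that general technique, not of the authors' protocol, with toy parameters:

    from math import prod

    def crt(residues, moduli):
        """Chinese remainder theorem: recover x mod prod(moduli) from x mod each modulus."""
        M = prod(moduli)
        x = 0
        for r, m in zip(residues, moduli):
            Mi = M // m
            x += r * Mi * pow(Mi, -1, m)   # pow(.., -1, m) is the modular inverse (Python 3.8+)
        return x % M

    # Asmuth-Bloom-style (t, n) sharing with toy pairwise-coprime moduli m_1 < ... < m_n.
    # Real parameters must also satisfy the Asmuth-Bloom security condition on the moduli.
    m0 = 97                              # secret space: s in [0, m0)
    moduli = [101, 103, 107, 109, 113]   # n = 5 shares
    t = 3
    s = 42
    A = 5                                # random blinding; s + A*m0 must stay below the product of any t moduli
    y = s + A * m0
    shares = [(y % m, m) for m in moduli]

    # Any t shares recover y by CRT, and the secret is y mod m0.
    subset = shares[1:4]
    y_rec = crt([r for r, _ in subset], [m for _, m in subset])
    print(y_rec % m0)                    # -> 42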

DMIPS - Defensive Mechanism against IP Spoofing

Internet usage has increased across the globe, and the network keeps growing at a high rate. Network providers cannot always afford the resources, such as computational power and bandwidth, needed to maintain their quality of service. Attackers exploit this inability in the form of Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks. Systems trying to mitigate DoS attacks should focus on the technique called IP spoofing, the creation of IP packets with a forged source address. IP spoofing helps DoS attackers maintain their anonymity, and it is especially effective against systems that use the source address to authenticate packets. Previously, an anti-spoofing method called HCF (Hop Count Filtering) was proposed which can effectively filter spoofed packets. HCF works on the premise that the attacker cannot falsify the hop count (HC), the number of hops an IP packet takes to reach the destination; this HC value can be inferred from the TTL (Time To Live) field in the IP packet. However, HCF has the following problems: 1) the possibility of multiple paths is ignored; 2) the method of building the HC tables should be more secure; 3) it lacks a good renewal procedure that can detect network changes. In this paper, we propose a two-level filtering scheme called DMIPS, based on HCF. DMIPS is secure, resolves the multiple-path problem, and can filter spoofed packets effectively. The scheme can detect changes in the network and update the HC values. DMIPS improves the quality of service of the network by minimizing the number of false positives. The network under discussion is of the server-client type, and the server is the point of attack.

Shashank Lagishetty, Pruthvi Sabbu, Kannan Srinathan
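
The HCF building block mentioned above can be illustrated concretely (a generic sketch of hop-count inference from TTL, assuming the usual set of common initial TTL values; names and values are illustrative, not the paper's):

    # Common initial TTL values used by major operating systems.
    COMMON_INITIAL_TTLS = (32, 64, 128, 255)

    def infer_hop_count(observed_ttl):
        """Infer the hop count as (initial TTL - observed TTL), taking the smallest
        common initial TTL that is >= the observed value."""
        initial = min(t for t in COMMON_INITIAL_TTLS if t >= observed_ttl)
        return initial - observed_ttl

    def looks_spoofed(src_ip, observed_ttl, hc_table):
        """Flag a packet whose inferred hop count disagrees with the stored value for its source IP."""
        expected = hc_table.get(src_ip)
        return expected is not None and infer_hop_count(observed_ttl) != expected

    # Toy usage: the table says 10.0.0.7 is normally 12 hops away.
    hc_table = {"10.0.0.7": 12}
    print(looks_spoofed("10.0.0.7", 116, hc_table))  # 128 - 116 = 12 hops -> False (consistent)
    print(looks_spoofed("10.0.0.7", 250, hc_table))  # 255 - 250 = 5 hops  -> True (suspicious)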

Public Key Cryptography

Provably Secure Key Assignment Schemes from Factoring

We provide constructions for key assignment schemes that are provably secure under the factoring assumption in the standard model. Our first construction is for simple “chain” hierarchies, and achieves security against key recovery attacks with a tight reduction from the problem of factoring integers of a special form. Our second construction applies for general hierarchies, achieves the stronger notion of key indistinguishability, and has security based on the hardness of factoring Blum integers. We compare our constructions to previous schemes, in terms of security and efficiency.

Eduarda S. V. Freire, Kenneth G. Paterson

Efficient CCA-Secure CDH Based KEM Balanced between Ciphertext and Key

In this paper we construct an efficient CCA-secure key encapsulation scheme in the standard model. The new scheme is based on the computational Diffie-Hellman assumption and the twinning technique, which has been widely discussed in recent years. Compared with previous schemes of the same kind, the new scheme is more generic, and it offers a simple approach for reconciling ciphertext length and key size by adjusting a parameter. By choosing a reasonable value for this parameter, a balance between ciphertext length and key size can be achieved.

Yamin Liu, Bao Li, Xianhui Lu, Dingding Jia

Generic Construction of Strongly Secure Timed-Release Public-Key Encryption

This paper provides a sufficient condition for constructing timed-release public-key encryption (TRPKE): the constructed TRPKE scheme guarantees strong security against malicious time servers, as proposed by Chow et al., and strong security against malicious receivers, as defined by Cathalo et al., in the random oracle model, provided the component IBE scheme is IND-ID-CPA secure, the component PKE scheme is IND-CPA secure, and the PKE scheme satisfies negligible γ-uniformity for every public key. Chow et al. proposed a strongly secure concrete TRPKE scheme in the standard model. To the best of our knowledge, the proposed construction is the first generic one for TRPKE that guarantees strong security, even though in the random oracle model.

Atsushi Fujioka, Yoshiaki Okamoto, Taiichi Saito

Identity-Based Server-Aided Decryption

Identity-based cryptosystems play an important role in modern cryptography, as they eliminate the need for costly certificates. However, all practical identity-based encryption schemes require a pairing operation in the decryption stage. Pairing is a heavy mathematical operation, especially for resource-constrained devices such as smart cards or wireless sensors. In other words, decryption can hardly be done on these devices if an identity-based cryptosystem is employed. We solve this problem by proposing a new notion called Identity-Based Server-Aided Decryption. It is similar to a normal identity-based encryption scheme, but it further enables the receiver to decrypt the ciphertext with the assistance of an external server, without needing to compute any pairing. Secure mechanisms are provided to detect whether the server has computed correctly and to prevent the server from getting any information about the plaintext or the user secret key. We give two concrete instantiations of this notion.

Joseph K. Liu, Cheng Kang Chu, Jianying Zhou

A Generic Variant of NIST’s KAS2 Key Agreement Protocol

We propose a generic three-pass key agreement protocol that is based on a certain kind of trapdoor one-way function family. When specialized to the RSA setting, the generic protocol yields the so-called KAS2 scheme that has recently been standardized by NIST. On the other hand, when specialized to the discrete log setting, we obtain a new protocol which we call DH2. An interesting feature of DH2 is that parties can use different groups (e.g., different elliptic curves). The generic protocol also has a hybrid implementation, where one party has an RSA key pair and the other party has a discrete log key pair. The security of KAS2 and DH2 is analyzed in an appropriate modification of the extended Canetti-Krawczyk security model.

Sanjit Chatterjee, Alfred Menezes, Berkant Ustaoglu

A Single Key Pair is Adequate for the Zheng Signcryption

We prove that the original Zheng signcryption scheme published at Crypto’97, with a couple of minor tweaks, requires only a single public/private key pair for each user. That is, a user can employ the same public/private key pair for both signcryption and unsigncryption in a provably secure manner. We also prove that the Zheng signcryption scheme allows a user to securely signcrypt a message to himself. Our first result confirms a long-held belief that signcryption reduces the overhead associated with public keys, while our second result foretells potential applications in cloud storage, where a user with a relatively less resourceful storage device may wish to off-load data to an untrusted remote storage network in a secure and unforgeable way.

Jia Fan, Yuliang Zheng, Xiaohu Tang

Towards Public Key Encryption Scheme Supporting Equality Test with Fine-Grained Authorization

In this paper we investigate a new category of public key encryption schemes which supports equality tests between ciphertexts. With this primitive, two users, who possess their own public/private key pairs, can issue tokens to a proxy to authorize it to perform an equality test between their ciphertexts. We provide a formulation and a corresponding construction for this primitive, and our formulation provides fine-grained authorization policy enforcement for users. With the increasing popularity of outsourcing data and computations to third-party service providers, this primitive will be an important building block in designing privacy protection solutions that support operations on encrypted data.

Qiang Tang

Posters

Lattice-Based Completely Non-malleable PKE in the Standard Model (Poster)

This paper presents ongoing work toward constructing an efficient completely non-malleable public-key encryption scheme based on lattices in the standard (common reference string) model. An encryption scheme is completely non-malleable if attackers have only negligible advantage even when they are allowed to transform the public key under which the related message is encrypted. Ventre and Visconti proposed two inefficient constructions of completely non-malleable schemes, one in the common reference string model using non-interactive zero-knowledge proofs, and another using interactive encryption schemes. Recently, two efficient public-key encryption schemes have been proposed, both based on pairing-based identity-based encryption.

Reza Sepahi, Ron Steinfeld, Josef Pieprzyk

Compliance or Security, What Cost? (Poster)

This paper presents ongoing work toward measuring the effectiveness of audit and assessment as an information security control. The trend towards security control measures that are employed to demonstrate compliance with legislation or regulations, rather than to actually detect or prevent breaches, is shown to result in a misallocation of funds. Information security is a risk function: paying for too much security can be more damaging in economic terms than not buying enough. This research reveals several major misconceptions among businesses about what security really means, and shows that compliance is pursued to the detriment of security. In this paper, we look at some of the causes of compliance-based audit failures and why they occur. It is easier to measure compliance than it is to measure security, and spending money to demonstrate compliance does not in itself provide security. When the money spent on achieving compliance reduces the funding available for control measures that may actually improve security, problems may arise.

Craig Wright

Preimage Attacks on Full-ARIRANG (Poster)

This paper presents ongoing work toward the first preimage attacks on the hash function ARIRANG, one of the first round candidates in the SHA-3 competition. ARIRANG has a unique design in which the feed-forward operation is computed not only after the last step but also in a middle step. In fact, this design prevents previous preimage attacks. We apply a meet-in-the-middle preimage attack to ARIRANG. Specifically, we propose a new initial structure optimized for ARIRANG and overcome the middle feed-forward.

Chiaki Ohtahara, Keita Okada, Yu Sasaki, Takeshi Shimoyama

Finding Collisions for Reduced Luffa-256 v2 (Poster)

This paper presents ongoing work toward the analysis of a second round SHA-3 candidate, Luffa. This article analyses the collision resistance of reduced-round versions of Luffa-256 v2, the 256-bit hash function in the Luffa family, and focuses on the security of the hash function. To the best of our knowledge, this is the first collision analysis for the fixed initial vector of Luffa. We show that collisions for 4 out of 8 steps of Luffa-256 v2 can be found with complexity 2^{90} using sophisticated message modification techniques.

Bart Preneel, Hirotaka Yoshida, Dai Watanabe

Improved Security Analysis of Fugue-256 (Poster)

We present some improved analytical results as part of the ongoing work on the analysis of the Fugue-256 hash function, a second round candidate in NIST’s SHA-3 competition. First we extend Aumasson and Phan’s integral distinguisher on 5.5 rounds of the final transformation of Fugue-256 to 16.5 rounds. Next we improve the designers’ meet-in-the-middle preimage attack on Fugue-256 from 2^{480} time and memory to 2^{416}. Finally, we comment on possible methods to obtain free-start distinguishers and free-start collisions for Fugue-256.

Praveen Gauravaram, Lars R. Knudsen, Nasour Bagheri, Lei Wei

Improved Meet-in-the-Middle Cryptanalysis of KTANTAN (Poster)

This paper presents ongoing work towards extensions of meet-in-the-middle (MITM) attacks on block ciphers, exploring developments in MITM attacks from hash function analysis such as (i) the splice-and-cut technique and (ii) the indirect-partial-matching technique. Our first contribution is to show corrections to previous cryptanalysis and to point out that the key schedule is more vulnerable to MITM attacks than previously reported. Secondly, we further improve the time complexities of previous attacks with (i) and (ii): the 80-bit secret keys of full-round KTANTAN-{32,48,64} can now be recovered with time complexities of 2^{72.9}, 2^{73.8} and 2^{74.4} respectively, each requiring 4 chosen plaintexts.

Lei Wei, Christian Rechberger, Jian Guo, Hongjun Wu, Huaxiong Wang, San Ling
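
To recall the generic principle the poster builds on (a toy meet-in-the-middle key recovery against a deliberately weak two-half cipher, not the KTANTAN attack itself):

    # Toy cipher: E(k1, k2, p) = g(k2, f(k1, p)) with independent 8-bit half-keys.
    f = lambda k1, p: ((p ^ k1) * 3) % 256
    g = lambda k2, x: (x + k2) % 256

    def mitm_recover(p, c):
        """Meet in the middle: tabulate forward values under k1, then match backward values under k2."""
        forward = {}
        for k1 in range(256):
            forward.setdefault(f(k1, p), []).append(k1)
        candidates = []
        for k2 in range(256):
            mid = (c - k2) % 256            # invert g under the guessed k2
            for k1 in forward.get(mid, []):
                candidates.append((k1, k2))
        return candidates                    # filter further with extra plaintext/ciphertext pairs

    p, (k1, k2) = 0x3C, (0xA7, 0x5E)
    c = g(k2, f(k1, p))
    print((k1, k2) in mitm_recover(p, c))    # True; cost about 2 * 2^8 instead of 2^16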

Toward Dynamic Attribute-Based Signcryption (Poster)

This paper presents ongoing work toward a new concept in attribute-based cryptosystems. At SCN 2010, Gagné, Narayan, and Safavi-Naini proposed attribute-based signcryption (ABSC) with a threshold structure. As in ciphertext-policy attribute-based encryption (CP-ABE), an encryptor can specify the access structure of decryptors, and as in attribute-based signature (ABS), each decryptor can verify the encryptor’s attributes. In contrast to the access structure of the decryptors, however, the access structure of the encryptor has to be fixed in the setup phase. In this paper, we investigate ABSC with a dynamic property, called dynamic ABSC (DABSC), where the encryptor’s access structures can be updated flexibly without re-issuing users’ secret keys.

Keita Emura, Atsuko Miyaji, Mohammad Shahriar Rahman

A Verifiable Distributed Oblivious Transfer Protocol

In the various distributed oblivious transfer (DOT) protocols designed in an unconditionally secure environment, a receiver contacts k out of m servers to obtain one of the n secrets held by a sender. After a protocol has been executed, the sender has no information on the choice of the receiver and the receiver has no information on the secrets she did not obtain.

These protocols are based on a semi-honest model: no mechanism prevents a group of malicious servers from disrupting the protocol such that the secret obtained by the receiver does not correspond to the chosen secret.

This paper presents ongoing work towards the definition of the first unconditionally secure verifiable DOT protocol in the presence of an active adversary who may corrupt up to k − 1 servers. In addition to the active adversary, we also assume that the sender may (passively) corrupt up to k − 1 servers to learn the choice of the receiver. Similarly, the receiver may (passively) corrupt up to k − 1 servers to learn more than the chosen secret.

Our DOT protocol allows the receiver to contact 4k − 3 servers to obtain one secret, while the required security is maintained.

Christian L. F. Corniaux, Hossein Ghodosi

Impracticality of Efficient PVSS in Real Life Security Standard (Poster)

This paper presents ongoing work toward the employment of RSA encryption in PVSS. Two PVSS schemes are shown to be efficient only when very small RSA public exponents, such as 3, are employed to encrypt the shares. However, such small RSA public exponents are insecure in these PVSS schemes, as they cannot apply padding to the encrypted messages. When larger RSA public exponents of practical size are employed, the two PVSS schemes have to process extremely large integers and become intolerably inefficient.

Kun Peng
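
The weakness alluded to (small exponent, no padding) is the textbook low-exponent attack, not anything specific to the two PVSS schemes: if an unpadded share s satisfies s^3 < n, then the ciphertext s^3 mod n equals s^3 over the integers, and the share is recovered by an integer cube root. A minimal illustration with toy numbers:

    def icbrt(c):
        """Integer cube root by binary search."""
        lo, hi = 0, 1 << ((c.bit_length() + 2) // 3 + 1)
        while lo < hi:
            mid = (lo + hi) // 2
            if mid ** 3 < c:
                lo = mid + 1
            else:
                hi = mid
        return lo

    n = 0x10000000000000000000000000000061   # stand-in modulus, far larger than s**3
    s = 123456789                            # a small unpadded share
    c = pow(s, 3, n)                         # textbook RSA encryption with e = 3
    print(icbrt(c) == s)                     # True: the share leaks without any private key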

Electromagnetic Analysis Enhancement with Signal Processing Techniques (Poster)

This paper presents ongoing work toward Electromagnetic Analysis (EMA) with signal processing techniques. Electromagnetic emission leaks confidential data of cryptographic devices; EMA exploits such information to reveal secret keys and has been actively studied. We present three signal processing techniques, bandpass filtering, signal companding and independent component analysis (ICA), and apply them to EMA. Their effectiveness is demonstrated through analyses of encryption algorithms on a synthesized application-specific integrated circuit (ASIC). Experiments show that the performance of EMA is greatly enhanced and that the number of signals needed to reveal keys is dramatically reduced.

Hongying Liu, Yukiyasu Tsunoo, Satoshi Goto
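
As a small illustration of the first of the three techniques (generic band-pass filtering of a measured trace with SciPy, not the authors' measurement setup; the sampling rate and band edges below are made-up placeholders):

    import numpy as np
    from scipy.signal import butter, filtfilt

    def bandpass(trace, fs, lo_hz, hi_hz, order=4):
        """Zero-phase Butterworth band-pass filter applied to one EM trace."""
        b, a = butter(order, [lo_hz, hi_hz], btype="band", fs=fs)
        return filtfilt(b, a, trace)

    # Toy trace: a weak "leakage" tone at 12 MHz buried in broadband noise, sampled at 100 MS/s.
    fs = 100e6
    t = np.arange(5000) / fs
    trace = 0.05 * np.sin(2 * np.pi * 12e6 * t) + np.random.normal(0, 1, t.size)
    filtered = bandpass(trace, fs, 10e6, 14e6)
    print(filtered.shape)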

Erratum: Compliance or Security, What Cost? (Poster)

In the original version of this paper the author affiliation was incorrect. It should read: Charles Sturt University, Australia

Craig Wright

Backmatter
