
1993 | Book

Advances in Cryptology — AUSCRYPT '92

Workshop on the Theory and Application of Cryptographic Techniques, Gold Coast, Queensland, Australia, December 13–16, 1992, Proceedings

Edited by: Jennifer Seberry, Yuliang Zheng

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This book contains the proceedings of AUSCRYPT '92, an international conference on cryptologic research held on the Gold Coast, Australia, in December 1992. It is the third conference held outside the series of CRYPTO meetings, held in Santa Barbara, California, each August, and EUROCRYPT meetings, held in European countries each northern spring; the first two were AUSCRYPT '90, held in Australia, and ASIACRYPT '91, held in Japan. The volume contains three invited papers and 44 contributed papers selected from 77 submissions. The articles cover all main topics in modern computer and communications security research, including: authentication, secret sharing, digital signatures, one-way hashing functions, design of block ciphers, cryptanalysis, cryptographic protocols, pseudo-random sequences and functions, and public key cryptography.

Table of contents

Frontmatter
Threshold cryptosystems

Often the power to use a cryptosystem has to be shared. In threshold schemes, any t out of l shareholders have the power to regenerate a secret key, while fewer than t do not. However, threshold schemes cannot be used directly in many applications, such as threshold signatures, in which t out of l shareholders have to co-sign a message. A normal threshold scheme would require the shareholders to send their shares to a trusted person who would sign for them, but the use of such a trusted person violates the main point of threshold signatures! We first overview the research in the field and then discuss a threshold decryption/signature scheme which is as secure as RSA. We conclude by giving a list of open problems.
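
As background, t-out-of-l sharing of this kind is commonly realised with Shamir's polynomial scheme. The following minimal Python sketch shows standard Shamir sharing over a prime field so that any t shares regenerate the secret; it is an illustration of the shared-power idea only, not the RSA-based threshold decryption/signature scheme discussed in the paper.

```python
# A minimal sketch of standard Shamir t-out-of-l secret sharing over a prime
# field (illustrative only; NOT the RSA-based threshold decryption/signature
# scheme discussed in the paper).
import random

P = 2**127 - 1  # a prime field large enough for the toy secret below

def share(secret, t, l):
    """Split `secret` into l shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    # Share i is the degree-(t-1) polynomial evaluated at x = i, for i = 1..l.
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, l + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any t shares."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(123456789, t=3, l=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 shares suffice
assert reconstruct(shares[2:]) == 123456789
```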

Yvo Desmedt
Authentication codes with perfect protection

In this paper we prove new results on authentication codes with perfect protection. We will prove that perfect protection for impersonation follows from perfect protection for substitution only if the source is uniform and derive a necessary and sufficient condition for codes that provide perfect protection for both types of attack. We prove a new lower bound on the probability of deception in substitution for codes with perfect protection and characterize the codes that satisfy the bound with equality.

L. Tombak, R. Safavi-Naini
Practical proven secure authentication with arbitration

Proven secure signature schemes and unconditionally secure authentication schemes with arbiter have been proposed. The former are not practical (too slow) and the latter cannot be reused. All these limitations are overcome in this paper by presenting a reusable, conditionally secure authentication scheme with arbiter. The scheme is unconditionally secure against denial by the sender of having sent a message (a property signatures do not have), conditionally secure against a receiver impersonating the sender or substituting a message, and conditionally secure against a similar fraud by the arbiter.

Yvo Desmedt, Jennifer Seberry
Authentication codes under impersonation attack

Performance of authentication codes under impersonation attack is considered. We assume two classes of strategies for the enemy, give expressions for the probability of success in each case, and study the relationship between the two classes. We prove a new lower bound on the probability of deception which points out the importance of the average distance between the encoding rules of the code in the protection it provides. Codes with perfect protection for each class are characterized and some constructions based on error correcting codes are proposed.

R. Safavi-Naini, L. Tombak
Cumulative arrays and geometric secret sharing schemes

Cumulative secret sharing schemes were introduced by Simmons et al (1991) based on the generalised secret sharing scheme of Ito et al (1987). A given monotone access structure together with a security level is associated with a unique cumulative scheme. Geometric secret sharing schemes form a wide class of secret sharing schemes which have many desirable properties including good information rates. We show that every non-degenerate geometric secret sharing scheme is ‘contained’ in the corresponding cumulative scheme. As there is no known practical algorithm for constructing efficient secret sharing schemes, the significance of this result is that, at least theoretically, a geometric scheme can be constructed from the corresponding cumulative scheme.

Wen-Ai Jackson, Keith M. Martin
Nonperfect secret sharing schemes

A nonperfect secret sharing scheme (NSS) consists of a family of access subsets Γ1, a family of semi-access subsets Γ2 and a family of non-access subsets Γ3. In an NSS, it is possible that |Vi| < |S|, where |Vi| is the size of a share and |S| is the size of the secret. This paper characterizes nonperfect secret sharing schemes. First, we show that (Γ1, Γ2, Γ3) is realizable if and only if Γ1 is monotone and Γ1 ∪ Γ2 is monotone. Then, we derive a lower bound on |Vi| in terms of a distance between Γ1 and Γ3. Finally, we show a condition for (Γ1, Γ2, Γ3) to achieve |Vi| = |S|/2 for all i.

Wakaha Ogata, Kaoru Kurosawa, Shigeo Tsujii
A construction of practical secret sharing schemes using linear block codes

In this paper we address the problem of constructing secret sharing schemes for general access structures. The construction is inspired by linear block codes. Constructions of threshold schemes using linear block codes were already presented in the early eighties in [6] and [7]. In this paper we generalize those results to construct secret sharing schemes for arbitrary access structures. We also present a solution to the problem of retrieving the secret.

Michael Bertilsson, Ingemar Ingemarsson
HAVAL — A one-way hashing algorithm with variable length of output (extended abstract)

A one-way hashing algorithm is a deterministic algorithm that compresses an arbitrarily long message into a value of specified length. The output value represents the fingerprint or digest of the message. A cryptographically useful property of a one-way hashing algorithm is that it is infeasible to find two distinct messages that have the same fingerprint. This paper proposes a one-way hashing algorithm called HAVAL. HAVAL compresses a message of arbitrary length into a fingerprint of 128, 160, 192, 224 or 256 bits. In addition, HAVAL has a parameter that controls the number of passes in which a message block (of 1024 bits) is processed. A message block can be processed in 3, 4 or 5 passes. By combining the output length with the number of passes, HAVAL provides fifteen (15) choices for practical applications where different levels of security are required. The algorithm is very efficient and particularly suited for the 32-bit computers which dominate the current workstation market. Experiments show that HAVAL is 60% faster than MD5 when 3 passes are required, 15% faster than MD5 when 4 passes are required, and as fast as MD5 when the full 5 passes are required. It is conjectured that finding two colliding messages requires on the order of 2^(n/2) operations, where n is the number of bits in a fingerprint.

Yuliang Zheng, Josef Pieprzyk, Jennifer Seberry
On the power of memory in the design of collision resistant hash functions

Collision resistant hash functions are an important basic tool for cryptographic applications such as digital signature schemes and integrity protection based on “fingerprinting”. This paper proposes a new efficient class of hash functions based on a block cipher that allows for a tradeoff between security and speed. The principles behind the scheme can be used to optimize similar proposals.

Bart Preneel, René Govaerts, Joos Vandewalle
A practical digital multisignature scheme based on discrete logarithms (extended abstract)

This paper proposes a practical digital multisignature scheme based on the Csig* cryptosystem derived from the Csig cryptosystem of Zheng and Seberry (1993). The simple scheme consists of three phases. In the first phase the issuer of the document prepares the document, the list of prospective signatories and a pad on which signatories are to write their signatures. In the second phase each signatory verifies the document, signs it and forwards it to the next signatory. In the third phase a trusted verifier or notary decides on the validity of the signatures. The scheme ensures that cheating by dishonest signatories does not go undetected. The scheme is practical and offers at least the same security level afforded by its underlying cryptosystem against external attacks. Internal attacks in the form of forging or cheating by a dishonest issuer or by one or more of the signatories (alone or in collaboration) require the solving of instances of the discrete logarithm problem.

Thomas Hardjono, Yuliang Zheng
Group-oriented undeniable signature schemes without the assistance of a mutually trusted party

In a group-oriented (t, n) undeniable signature scheme, where t is the threshold value and n is the total number of group members, each group, instead of each individual member within the group, publishes a single group public key. During an initial “commitment phase”, at least t group members work together to sign a message. In a “verification phase”, all signers work together to prove the validity of the signature to an outsider. Only a single group public key is required for an outsider to verify the group signature. In case some group members disavow the signature, a judge can resolve the dispute between the group and a verifier. The group-oriented (t, n) undeniable signature scheme has the following four properties: (1) the group signature is generated jointly by at least t group members; (2) the signature verification process is simplified because only one group public key is required; (3) the signature can only be verified with the consent of all signers; (4) the signers are accountable for the messages they have signed. More specifically, we propose two schemes where the threshold value t can be either n or 1; in other words, either all group members or a single member is required to sign a group signature. Moreover, these two schemes do not require the assistance of a mutually trusted party. Each member selects his own secret and the group public key is determined by all group members.

Lein Harn, Shoubao Yang
Highly nonlinear 0–1 balanced boolean functions satisfying strict avalanche criterion (extended abstract)

Nonlinearity, 0–1 balancedness and the strict avalanche criterion (SAC) are important criteria for cryptographic functions. Bent functions have maximum nonlinearity and satisfy the SAC; however, they are not 0–1 balanced and hence cannot be directly used in many cryptosystems where 0–1 balancedness is needed. In this paper we construct (i) 0–1 balanced boolean functions on V_(2k+1) (k ≥ 1) having nonlinearity 2^(2k) − 2^k and satisfying the SAC, and (ii) 0–1 balanced boolean functions on V_(2k) (k ≥ 2) having nonlinearity 2^(2k−1) − 2^k and satisfying the SAC. We demonstrate that these nonlinearities are very high not only for 0–1 balanced functions satisfying the SAC but also for all 0–1 balanced functions.
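
Balance and nonlinearity figures of this kind can be checked mechanically with the Walsh-Hadamard transform, using nl(f) = 2^(n-1) - (1/2) max_a |W_f(a)|. The sketch below applies this check to an illustrative balanced function of five variables of my own choosing; it happens to reach the quoted value 2^4 − 2^2 = 12 but is not the paper's SAC-satisfying construction.

```python
# Check balance and nonlinearity of a Boolean function on V_n with the
# Walsh-Hadamard transform: nl(f) = 2^(n-1) - max_a |W_f(a)| / 2.
# The example function below is an arbitrary illustration, not the paper's
# SAC-satisfying construction.
from itertools import product

def balance_and_nonlinearity(f, n):
    xs = list(product((0, 1), repeat=n))
    balanced = sum(f(x) for x in xs) == 2 ** (n - 1)
    def walsh(a):                      # W_f(a) = sum_x (-1)^(f(x) + a.x)
        return sum((-1) ** (f(x) ^ (sum(ai & xi for ai, xi in zip(a, x)) & 1))
                   for x in xs)
    max_abs_walsh = max(abs(walsh(a)) for a in xs)
    return balanced, 2 ** (n - 1) - max_abs_walsh // 2

# f(x) = x1*x2 + x3*x4 + x5 on V_5: balanced, with nonlinearity 2^4 - 2^2 = 12.
f = lambda x: (x[0] & x[1]) ^ (x[2] & x[3]) ^ x[4]
print(balance_and_nonlinearity(f, 5))   # (True, 12)
```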

Jennifer Seberry, Xian-Mo Zhang
Linear nonequivalence versus nonlinearity

The choice of a collection of cryptographically strong Boolean functions is crucial in designing a strong hashing algorithm. The paper shows that it is possible to obtain five linearly nonequivalent functions with five Boolean variables which are cryptographically strong and easy to implement. They can be readily used to design hashing algorithms (of the MD5 structure).

Chris Charnes, Josef Pieprzyk
Constructing large cryptographically strong S-boxes

While there is evidence that large substitution boxes (S-boxes) have better cryptographic properties than small S-boxes, they are much harder to design. The difficulty arises from the relative scarcity of suitable boolean functions as the size of the S-box increases. We describe the construction of cryptographically strong 5×5 S-boxes using near-bent boolean functions of five variables. These functions, where the number of variables is odd, possess highly desirable cryptographic properties and can be generated easily and systematically. Moreover, the S-boxes they compose are shown to satisfy all the important design criteria. Further, we feel that it is possible to generalize near-bent functions to any odd number of variables, thereby making construction of yet larger S-boxes feasible.

John Detombe, Stafford Tavares
Nonasymptotic estimates of information protection efficiency for the wire-tap channel concept

In this paper we consider the construction of code noising for the general case of a noisy main channel and for codes with finite block length. It can be proved that such a construction is equivalent to the combined use of ordinary error correction codes and code noising for a noiseless main channel. Therefore the complexity of code noising in the general case is at most that of ordinary error correcting codes. We also show that real codes with a moderate block length give significant information protection and that the losses in energy, compared with the asymptotic case, are not very large. We emphasize that, for protecting the transmission of key-exchange information, a more suitable criterion than theoretical information measures is whether the actually transmitted message falls within a list of given size found by optimal list decoding. The examples given in the paper confirm that code noising has good prospects for practical applications.

Valery Korzhik, Victor Yakovlev
Cryptanalysis of LOKI 91

In this paper we examine LOKI 91, the redesign of LOKI proposed in [5]. First it is shown that there is no characteristic with a probability high enough to allow a successful differential attack on LOKI 91. Secondly we show that the size of the image of the F-function in LOKI 91 is (8/13) × 2^32. Finally we introduce a chosen plaintext attack that reduces an exhaustive key search on LOKI 91 by almost a factor of 4 using 2^33 + 2 chosen plaintexts.

Lars Ramkilde Knudsen
Cryptanalysis of summation generator

In this paper two known plaintext attacks on the summation generator will be described. The first attack is a new method of attack while the second method is due to Meier and Staffelbach. These two methods will be compared and contrasted.

Ed Dawson
Secure addition sequence and its applications on the server-aided secret computation protocols

Recently, researchers have considered an approach called the Server-Aided Secret Computation (SASC) protocol, in which a powerful untrusted auxiliary device helps a smart card compute a secret function efficiently. However, the computation in their protocol contains some redundancy. In this paper, we introduce a new concept called the Secure Addition Sequence and develop an efficient algorithm to construct it. Based upon the concept of the Secure Addition Sequence, the performance of the SASC protocol can be enhanced.

Chi-Sung Laih, Sung-Ming Yen
Subliminal channels for signature transfer and their application to signature distribution schemes

In this paper, we consider the subliminal channel, hidden in an identification scheme, for signature transfer. We point out that the direct parallelization of the Fiat-Shamir identification scheme has a subliminal channel for the transmission of a digital signature, which does not exist in the serial (zero-knowledge) version. We apply this subliminal channel to a multi-verifier interactive protocol and propose a distributed verification signature that cannot be verified without the cooperation of all verifiers. Our proposed protocol is the first implementation of a distributed verification signature without secure channels, and the basic idea of our construction suggests a novel primitive with which a signature transfer secure against adversaries can be constructed using only a one-way function (without a trapdoor).

Kouichi Sakurai, Toshiya Itoh
A practical secret voting scheme for large scale elections

This paper proposes a practical secret voting scheme for large scale elections. The participants of the scheme are voters, an administrator, and a counter. The scheme ensures the privacy of the voters even if both the administrator and the counter conspire, and realizes voting fairness, i.e., no one can learn even intermediate results of the voting. Furthermore, fraud by either a voter or the administrator is prevented.

Atsushi Fujioka, Tatsuaki Okamoto, Kazuo Ohta
Privacy for multi-party protocols

We initiate an information-theoretic investigation of the amount of information leaked in multi-party protocols. It is shown that mutual information is a satisfactory measure of the amount of leaked information. For more than two parties, we apply multi-terminal information theory and give a formulation of the leaked information.

Takashi Satoh, Kaoru Kurosawa, Shigeo Tsujii
New protocols for electronic money

This paper proposes two new protocols for an electronic cash system which provide, in a simple way, three main features: non-reusability of the coins; anonymity of the payments; and divisibility of the coins, so that a coin C can be used in its totality or subdivided into several pieces. This simplicity seems to be an advantage over previous work on this issue. The way divisibility is addressed, using a variant of the Schnorr scheme, is original (§3.2, 3.3, 3.4). The last section gives information on practical aspects such as the amount of computation, storage, and transmission.

Jean Claude Pailles
Modelling and analyzing cryptographic protocols using Petri nets

In this paper, we present a Petri net based methodology for the formal modelling and analysis of cryptographic protocols. We set up modelling rules that represent the protocols in terms of Petri nets. The modelling produces formal descriptions for the protocols with good visibility and layered abstraction. In particular, the descriptions clearly visualize the causal relations and constraints among the data flows in the protocols. An intruder model is introduced to formulate intruder attacks and to generate test cases against the cryptographic protocols. A procedure that exhaustively generates the test cases and searches for states that violate specified security criteria, is also proposed. We demonstrate the value of this methodology by applying it to a number of published protocols. In this way, we are able to reveal security flaws of these protocols. This methodology is applicable to both public-key and private-key based cryptographic protocols.

Benjamin B. Nieh, Stafford E. Tavares
On verifiable implicit asking protocols for RSA computation

Verifiable implicit asking speeds up a feasible computation (e.g., y = x^d mod n) that depends on a secret (d) stored in a relatively powerless device (called the Client) with the help of powerful device(s) (called the Server(s)), in such a way that the Client can check the behavior of the Server(s) and that the leakage of the secret to the Server(s) is suppressed as much as possible. Possible attacks to obtain the Client's secret are classified into passive and active attacks. Passive attacks can be completely nullified by dividing the target computation into two parts, one depending on d and the other not, and asking the Server to perform only the latter part. However, since such a method gives relatively low speed-up, we discuss how to obtain verifiable implicit asking protocols that are highly secure against passive attacks by modifying base protocols which are fast enough but not completely free from passive attacks, since they send the Server some information that is not independent of d.

Tsutomu Matsumoto, Hideki Imai, Chi-Sung Laih, Sung-Ming Yen
Modified Maurer-Yacobi's scheme and its applications

In Eurocrypt '91, Maurer and Yacobi developed a method for building a trapdoor into the one-way function of exponentiation modulo a composite number, which enables an identity-based non-interactive key distribution system. In this paper, we provide some improvements of their scheme and then present a modified trapdoor one-way function by combining Maurer-Yacobi's scheme with the RSA scheme. We demonstrate that many applications which are impossible in the original scheme can be constructed from this modified scheme. As examples, we present several protocols based on it, such as identification, key distribution and signature schemes. We have implemented the Pohlig-Hellman and Pollard ρ-methods for computing discrete logarithms modulo a composite number, which shows that the average running time for computing logarithms is too large for the scheme to be practical. Therefore, considering current algorithms and technology, we maintain that it is more efficient and practical to adopt a certificate-based scheme, on which all the protocols presented in this paper can be based as well.

Chae Hoon Lim, Pil Joong Lee
The vulnerability of geometric sequences based on fields of odd characteristic
Extended abstract

A new method of cryptologic attack on binary sequences is given, using their linear complexities relative to odd prime numbers. We show that, relative to a particular prime number p, the linear complexity of a binary geometric sequence is low. It is also shown that the prime p can be determined with high probability by a randomized algorithm if a number of bits much smaller than the linear complexity is known. This determination is made by exploiting the imbalance in the number of zeros and ones in the sequences in question, and uses a new statistical measure, the partial imbalance.

Andrew Klapper
A fast cryptographic checksum algorithm based on stream ciphers

A design principle for the computation of a cryptographic checksum is proposed. Unlike most existing message authentication algorithms, the proposed scheme is based on stream cipher techniques and is non-iterative. In this scheme, a key stream sequence is used to demultiplex the message into two subsequences, which are then fed into two accumulating feedback shift registers to produce the checksum (also called a message authentication code). The scheme is suitable for high-speed implementation and possesses valuable properties such as “perfect hashing”, “perfect MAC” and complete key diffusion.

Xuejia Lai, Rainer A. Rueppel, Jack Woollven
An approach to the initial state reconstruction of a clock-controlled shift register based on a novel distance measure

The problem of reconstructing the initial state of a clock-controlled shift register is considered when the characteristic polynomial, a segment of the output sequence and the probability of ones in the clock sequence are known. This problem is more general than the one considered in [2], and it is solved using a quite different approach. A novel distance measure for the comparison of two binary sequences of different lengths is proposed and its main characteristics relevant to the cryptanalysis are derived. An algorithm for the cryptanalysis based on the proposed distance measure is presented and its main characteristics are pointed out. The expected minimal length of the observed sequence for a unique initial state solution is estimated. Illustrative numerical examples are included.

Miodrag J. Mihaljevic
Construction of m-ary de Bruijn sequences (extended abstract)

A new method for constructing large numbers of m-ary de Bruijn sequences is given.

Jun-Hui Yang, Zong-Duo Dai
Information technology security standards — An Australian perspective

From a telecommunications perspective, standards facilitate the implementation of distributed applications. Such systems can be implemented using components produced by different suppliers, at different times, and in ways that involve a minimum of proprietary intellectual property. As such open systems become widely implemented, it is becoming increasingly important to have standards for security services and mechanisms to allow the interests of all interconnected parties to be protected. This paper discusses the role of standards in providing a link between the large body of available theory and business needs. A standardised approach has the following advantages: agreement can be reached on the meaning of security terminology; security mechanisms can be subject to international, expert scrutiny before adoption; common security mechanisms can be developed in such a way that re-use is possible; and the limited amount of available technical expertise can be efficiently used and made accessible to all parts of industry and government. When an analysis is made of security standardisation activities around the world, it is quickly appreciated that we are in fact well away from realising an optimum approach to security standards development. However, there is still much to be gained from the standardisation process. This paper looks at the range of security standardisation activities, and then focuses on the work being done to develop generic (basic) security building block standards in the International Organisation for Standardisation (ISO)/International Electrotechnical Commission (IEC), Joint Technical Committee 1, Subcommittee 27. The range of Subcommittee 27 activities is summarised, and an update is given on progress to date. This status is then placed in the perspective of related Australian standardisation activity.

John Snare
Non-interactive generation of shared pseudorandom sequences

We address the following problem: given a random seed secretly shared among a group of individuals, non-interactively generate pieces corresponding to a much longer shared pseudorandom sequence. Shared randomness is an essential resource in distributed computing, and non-interactive ways of generating it can be useful in applications such as Byzantine Agreement, common coin flipping or secure computation protocols. Our first result is negative: well-known cryptographically strong pseudorandom number generators cannot be evaluated without interaction; in particular, it is shown that constructions that recursively apply a one-way function to a random seed and output at each iteration the simultaneously hard bits in the input of the one-way function are actually incompatible with a homomorphic evaluation. On the other hand, we show that pseudorandom generators that can both be proven cryptographically strong and be evaluated in a shared manner without interaction do exist. A concrete implementation, under the RSA assumption, is described.

Manuel Cerecedo, Tsutomu Matsumoto, Hideki Imai
A generalized description of DES-based and Benes-based permutation generators

The construction of pseudorandom permutation generators has been of major interest since Luby and Rackoff's first description. Numerous papers have been dedicated to their simplification. Besides the original DES-based construction, further constructions of pseudorandom permutation generators have been introduced. One of these is based on Benes networks, which are well known tools in the field of parallel processing. Up to now both constructions had apparently little in common but their pseudorandomness. In this paper a new type of construction, the Clos-based permutation generator, is introduced. The DES-based and the Benes-based generators are shown to be special cases of this new type. Viewing both as Clos-based generators gives new insights into known results: the original construction of the Benes-based generators can be improved; some of the already known impossibility results concerning the DES-based generators become better understandable; and finally it is possible to formulate new necessary conditions for the pseudorandomness of the generated permutations.

Michael Portz
Prime generation with the Demytko-Miller-Trbovich algorithm

Prime numbers satisfying certain constraints are used in public-key cryptosystems. Consequently, some attention has been paid to algorithms which allow large, cryptographically useful primes to be created. One algorithm, devised by Miller and Trbovich, was neglected until Nick Demytko of Telecom Research Laboratories published a theoretical examination of it in 1988. At Telecom's invitation the following practical analysis of the method was undertaken. An implementation of the algorithm was written, and nearly one million primes were generated and analysed. This paper gives details of the algorithm, implementation, experiments and results, and draws conclusions about the applicability of the algorithm.
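
The Demytko-Miller-Trbovich algorithm itself is detailed in the paper; for orientation only, a generic large-prime generator of the kind such work is usually compared against might look like the hedged sketch below (incremental search screened by Miller-Rabin; all parameters are illustrative).

```python
# A generic large-prime generator for comparison purposes: pick a random odd
# starting point and search upwards, screening candidates with Miller-Rabin.
# This is NOT the Demytko-Miller-Trbovich algorithm analysed in the paper.
import random

def is_probable_prime(n, rounds=40):
    if n < 2:
        return False
    for sp in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):   # quick trial division
        if n % sp == 0:
            return n == sp
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):                                    # Miller-Rabin witnesses
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(bits=512):
    candidate = random.getrandbits(bits) | (1 << (bits - 1)) | 1   # odd, full bit length
    while not is_probable_prime(candidate):
        candidate += 2
    return candidate

print(random_prime(256))
```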

Leisa Condie
Constructions of feebly-one-way families of permutations

The unrestricted circuit complexity C(·) over the basis of all logic 2-input/1-output gates is considered. It is proved that certain explicitly defined families of permutations {f_n} are feebly one-way of order 2, i.e., the functions f_n satisfy the property that, for increasing n, C(f_n^-1) approaches 2 · C(f_n) while C(f_n) tends to infinity. Both these functions and their corresponding complexities are derived by a method that exploits certain graphs called (n−1, s)-stars.

Alain P. L. Hiltgen
On bit correlations among preimages of “many-to-one” one-way functions
A new approach to studying the randomness and hardness of one-way functions

This paper presents a new measure of the complexity of many-to-one functions. We study bit correlations among the preimages of an element of the range of many-to-one one-way functions. In particular, we investigate the correlation among the least significant bits of the preimages of 2-to-1 one-way functions based on algebraic problems such as factorization and the discrete logarithm.

Kouichi Sakurai, Toshiya Itoh
The fast cascade exponentiation algorithm and its applications on cryptography

In this paper, the evaluation of $\prod_{i=1}^{P} M_i^{b_i}$ is considered and a fast algorithm is proposed. Throughout this paper, the above evaluation is called cascade exponentiation. The cases P=1, P=2 and P=3 of cascade exponentiation are of special importance in cryptographic applications. A performance analysis and comparisons of the proposed algorithm with the best known method are given.
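
For P = 2, cascade exponentiation is often computed with the textbook trick of scanning both exponents' bits jointly (commonly attributed to Shamir), which saves roughly half the multiplications of two separate square-and-multiply runs. The sketch below shows that baseline under a modular-arithmetic assumption; it is not necessarily the faster algorithm proposed in the paper.

```python
# Simultaneous (cascade) exponentiation for P = 2: compute M1^b1 * M2^b2 mod n
# with a single left-to-right scan of the exponent bits ("Shamir's trick").
# This is a textbook baseline, not necessarily the algorithm proposed in the paper.
def cascade_exp2(m1, b1, m2, b2, n):
    m12 = (m1 * m2) % n                      # precompute the product once
    result = 1
    for i in range(max(b1.bit_length(), b2.bit_length()) - 1, -1, -1):
        result = (result * result) % n       # one squaring per bit position
        pair = ((b1 >> i) & 1, (b2 >> i) & 1)
        if pair == (1, 0):
            result = (result * m1) % n
        elif pair == (0, 1):
            result = (result * m2) % n
        elif pair == (1, 1):
            result = (result * m12) % n      # handle both exponent bits with one multiply
    return result

n = 2**61 - 1
assert cascade_exp2(3, 105, 7, 92, n) == (pow(3, 105, n) * pow(7, 92, n)) % n
```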

Sung-Ming Yen, Chi-Sung Laih
The design of a conference key distribution system

In this paper, we propose a conference key distribution system for generating a common secret key for two or more users. In our system, each user possesses a secret key and a public key. Initially, the chairperson constructs a conference key associated with his secret key and the conference members' public keys. Then each member can obtain and authenticate the conference key by using his secret key. Further, we have shown that the security of our proposed system is based on the difficulty of breaking the Diffie-Hellman key distribution system.

Chin-Chen Chang, Tzong-Chen Wu, C. P. Chen
Remarks on “The design of a Conference Key Distribution System”

Chang, Wu and Chen propose a Conference Key Distribution System (CKDS) in these proceedings. This note describes an attack on their system and presents a possible solution.

Edward Zuk
Public-key cryptosystem based on the discrete logarithm problem

In 1985, T. ElGamal proposed a public-key cryptosystem and a signature scheme in which the difficulty of breaking the system is based on the difficulty of computing a discrete logarithm in a finite group. For the same security level, the size of the ciphertext and the computational time of ElGamal's encryption are double those of the well-known RSA scheme. In this paper, we propose a public-key cryptosystem based on the discrete logarithm in which the size of the ciphertext and the computational time are the same as those of the RSA scheme, while the security level is the same as that of the ElGamal cryptosystem.
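
For reference, textbook ElGamal encryption works as in the toy sketch below (illustrative parameters of my own choosing; the paper's variant with RSA-sized ciphertexts is not reproduced here). The pair structure of the ciphertext is what doubles its size relative to RSA.

```python
# Textbook ElGamal encryption over Z_p*, toy parameters only (for reference;
# this is NOT the improved variant proposed in the paper).
import random

p = 2**127 - 1          # toy prime modulus (a Mersenne prime; not a hardened choice)
g = 3                   # illustrative base in Z_p*
x = random.randrange(2, p - 1)          # receiver's secret key
y = pow(g, x, p)                        # receiver's public key

def encrypt(m):
    k = random.randrange(2, p - 1)      # fresh ephemeral exponent per message
    return pow(g, k, p), (m * pow(y, k, p)) % p   # ciphertext is a pair -> double size

def decrypt(c1, c2):
    return (c2 * pow(c1, p - 1 - x, p)) % p       # c2 / c1^x mod p

m = 123456789
assert decrypt(*encrypt(m)) == m
```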

Lein Harn, Shoubao Yang
Elliptic curves over F_p suitable for cryptosystems

Koblitz ([5]) and Miller ([6]) proposed a method by which the group of points on an elliptic curve over a finite field can be used for public-key cryptosystems instead of a finite field. To realize signature or identification schemes on a smart card, we need to reduce both the amount of data stored in the card and the amount of computation it performs. In this paper, we show how to construct such elliptic curves while keeping the security high.
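
As background to this entry, the group of points that such cryptosystems rely on is defined by the standard addition law on y^2 = x^3 + ax + b over F_p. The sketch below uses a tiny illustrative curve; selecting curves with good security, the subject of the paper, is not reproduced here.

```python
# Affine point addition/doubling on y^2 = x^3 + a*x + b over F_p, the group
# operation underlying elliptic-curve cryptosystems. The curve parameters are
# illustrative only; curve selection for security is the topic of the paper.
p, a, b = 97, 2, 3            # tiny toy curve y^2 = x^3 + 2x + 3 over F_97
O = None                      # point at infinity (group identity)

def add(P, Q):
    if P is O:
        return Q
    if Q is O:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                                            # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow((2 * y1) % p, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow((x2 - x1) % p, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return x3, (lam * (x1 - x3) - y1) % p

def scalar_mul(k, P):
    R = O
    while k:
        if k & 1:
            R = add(R, P)
        P, k = add(P, P), k >> 1
    return R

P0 = (3, 6)     # 6^2 = 36 and 3^3 + 2*3 + 3 = 36 mod 97, so P0 lies on the curve
assert add(P0, O) == P0
assert scalar_mul(5, P0) == add(P0, scalar_mul(4, P0))
```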

Atsuko Miyaji
The probability distribution of the Diffie-Hellman key

The probability distribution of the key generated by the Diffie-Hellman public key-distribution system is derived. For different prime factorizations of p−1, where p is the prime modulus of the Diffie-Hellman system, the probabilities of the most and the least likely Diffie-Hellman key are found. A lower bound for the entropy of the Diffie-Hellman key is also derived. For the case p−1 = 2q, with q prime, it is shown that the key distribution is very close to the uniform distribution and the key entropy is virtually the maximum possible. A tight upper bound on the probability of the most likely key is also derived, from which the form of the prime factorization of p−1 maximizing the probability of the most likely Diffie-Hellman key is found. The conditions for generating equally likely Diffie-Hellman keys for any prime factorization of p−1 are given.
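
For a toy modulus the distribution studied here can be tabulated exhaustively. The sketch below is my own illustration, with p = 23 so that p − 1 = 2q for the prime q = 11: it computes the exact probabilities of the key g^(xy) mod p for uniform secret exponents and the resulting entropy.

```python
# Exhaustively tabulate the distribution of the Diffie-Hellman key g^(xy) mod p
# for uniformly chosen secret exponents x, y, and compute its entropy.
# Toy parameters only: p = 23 gives p - 1 = 2 * 11 with q = 11 prime.
from collections import Counter
from math import log2

p, g = 23, 5                      # 5 is a primitive root modulo 23

counts = Counter(pow(g, x * y, p) for x in range(1, p) for y in range(1, p))
total = (p - 1) ** 2
probs = [c / total for c in counts.values()]

entropy = -sum(q * log2(q) for q in probs)
print(f"most likely key prob = {max(probs):.4f}")
print(f"entropy = {entropy:.3f} bits (max possible = {log2(p - 1):.3f})")
```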

Christian P. Waldvogel, James L. Massey
A modular exponentiation unit based on systolic arrays

The described architecture of a modular exponentiation unit with systolic modular multipliers has the following features:
• simple VLSI implementation based on systolic arrays, which are improved versions of the multipliers proposed in [Atrubi65];
• two identical systolic arrays implementing Montgomery's modular multiplication method;
• small data paths because of the serial operation mode;
• the required number of clock cycles for a modular multiplication depends on the actual size of the operands and not on the size of the systolic arrays;
• by separating the cells in the middle of the systolic arrays, the modular multiplier can be reconfigured so that two modular multipliers are available for operands of half the size; this can be used for the parallel processing of an exponentiation with a half-sized modulus (lower security requirements) or for an application of the Chinese Remainder Theorem;
• the throughput and the area demand of a chip for modular exponentiations based on this architecture can be widely adjusted by the selection of the design parameters (base b, number of modular multipliers, number of registers).
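
The arithmetic realised by the two systolic arrays is Montgomery's modular multiplication. As background, a plain word-level software sketch of Montgomery reduction follows; it illustrates the arithmetic only, not the systolic hardware organisation described in the paper.

```python
# Software sketch of Montgomery multiplication: computes a*b*R^-1 mod n for
# R = 2^k > n. This illustrates the arithmetic realised by the systolic arrays,
# not the hardware architecture described in the paper.
def montgomery_setup(n):
    k = n.bit_length()
    R = 1 << k
    n_prime = -pow(n, -1, R) % R          # n * n_prime == -1 (mod R)
    return R, k, n_prime

def mont_mul(a, b, n, R, k, n_prime):
    t = a * b
    m = (t * n_prime) % R                 # choose m so that t + m*n is divisible by R
    u = (t + m * n) >> k                  # exact division by R
    return u - n if u >= n else u

n = 2**255 - 19                            # an odd modulus (example value)
R, k, n_prime = montgomery_setup(n)
a, b = 1234567, 7654321
# Map into the Montgomery domain, multiply, map back, and check the result.
aR, bR = (a * R) % n, (b * R) % n
abR = mont_mul(aR, bR, n, R, k, n_prime)
assert mont_mul(abR, 1, n, R, k, n_prime) == (a * b) % n
```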

Jörg Sauerbrey
A comparison of key distribution patterns constructed from circle geometries

A key distribution pattern is a combinatorial structure which provides a secure method of distributing secret keys among a number of participants in a cryptographic network. Inversive and Laguerre planes have been used to construct key distribution patterns with storage requirements lower than the trivial distribution system. In this paper we review these and introduce key distribution patterns arising from Minkowski planes, the third of the so-called circle geometries. In addition, we give a comparison of the storage requirements of the key distribution patterns associated with each of the circle geometries.

Christine M. O'Keefe
A block cipher method using combinations of different methods under the control of the user key

In this paper, we describe a 64-bit multi-round block cipher, suitable for software implementation, in which three different encryption methods are combined in a sequence determined by the user key. In this way, whilst the design is public knowledge, the actual encryption method selected by the user key is kept secret. This method has been implemented using the three block ciphers: Khufu, Loki, and a cipher by Lai and Massey. The performance and cryptanalysis results using the CRYPT-XB package for this example are provided.

Mike Rezny, Eddie Trimarchi
An attack on two hash functions by Zheng-Matsumoto-Imai

In [ZMI89, ZMI90] two constructions for a collision resistant hash function were proposed. The first scheme is based on a block cipher, and the second scheme uses modular arithmetic. It is shown in this paper that both proposals have serious weaknesses.

Bart Preneel, René Govaerts, Joos Vandewalle
Primality testing with Lucas functions

A generalization of Fermat's Little Theorem is derived by using Lucas functions. This generalization yields new classes of pseudoprimes and can be used to improve some well-known primality tests.
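
One classical Fermat-type congruence for Lucas functions is V_p ≡ P (mod p) for every prime p, where V_n is the Lucas function with parameters (P, Q). The sketch below uses it as a simple compositeness check; it illustrates the kind of test involved, not necessarily the exact tests developed in the paper.

```python
# Lucas-function analogue of Fermat's Little Theorem: with V_0 = 2, V_1 = P and
# V_n = P*V_{n-1} - Q*V_{n-2}, every prime p satisfies V_p == P (mod p).
# Numbers violating the congruence are certainly composite; composites passing it
# are "Lucas pseudoprimes" for the chosen (P, Q). Illustrative sketch only, not
# necessarily the exact tests developed in the paper.
def lucas_V_mod(n, P, Q, mod):
    """Compute V_n mod `mod` with a left-to-right doubling chain."""
    vk, vk1, qk = 2 % mod, P % mod, 1             # (V_0, V_1, Q^0)
    for bit in bin(n)[2:]:
        if bit == '1':                            # index k -> 2k + 1
            vk = (vk * vk1 - P * qk) % mod        # V_{2k+1} = V_k*V_{k+1} - P*Q^k
            vk1 = (vk1 * vk1 - 2 * Q * qk) % mod  # V_{2k+2} = V_{k+1}^2 - 2*Q^{k+1}
            qk = (qk * qk * Q) % mod
        else:                                     # index k -> 2k
            vk1 = (vk * vk1 - P * qk) % mod       # V_{2k+1}
            vk = (vk * vk - 2 * qk) % mod         # V_{2k}   = V_k^2 - 2*Q^k
            qk = (qk * qk) % mod
    return vk

def lucas_fermat_test(n, P=3, Q=1):
    """Compositeness check derived from the congruence V_n == P (mod n)."""
    return lucas_V_mod(n, P, Q, n) == P % n

assert all(lucas_fermat_test(q) for q in (101, 1009, 104729))   # primes always pass
print(lucas_fermat_test(1001))   # 1001 = 7 * 11 * 13 -> False, exposing compositeness
```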

Rudolf Lidl, Winfried B. Müller
Backmatter
Metadata
Title
Advances in Cryptology — AUSCRYPT '92
Edited by
Jennifer Seberry
Yuliang Zheng
Copyright year
1993
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-47976-5
Print ISBN
978-3-540-57220-6
DOI
https://doi.org/10.1007/3-540-57220-1