
2003 | Book

Fundamentals of Computer Security

Authors: Prof. Dr. Josef Pieprzyk, Dr. Thomas Hardjono, Prof. Dr. Jennifer Seberry

Publisher: Springer Berlin Heidelberg


Table of Contents

Frontmatter
1. Introduction
Abstract
In 1988, the New Encyclopædia Britannica defined cryptology as: The science concerned with communications in secure and usually secret form. It encompasses both cryptography and cryptanalysis. The former involves the study and application of the principles and techniques by which information is rendered unintelligible to all but the intended receiver, while the latter is the science and art of solving cryptosystems to recover such information.
Josef Pieprzyk, Thomas Hardjono, Jennifer Seberry
2. Background Theory
Abstract
This chapter covers the main concepts and notions from Number Theory, Information Theory, and Complexity Theory that are frequently used in cryptology. Those readers with a strong mathematical background may wish to browse through this chapter or skip it completely.
Josef Pieprzyk, Thomas Hardjono, Jennifer Seberry
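As a small taste of the number-theoretic machinery the chapter draws on, here is a minimal Python sketch (an editorial illustration, not the book's own code) of square-and-multiply modular exponentiation, the workhorse behind most public-key operations; Python's built-in pow(base, exp, mod) performs the same computation:

    def mod_exp(base, exponent, modulus):
        """Square-and-multiply: compute base**exponent mod modulus in
        O(log exponent) multiplications instead of exponent - 1."""
        result = 1
        base %= modulus
        while exponent > 0:
            if exponent & 1:                    # low bit set: fold base in
                result = (result * base) % modulus
            base = (base * base) % modulus      # square for the next bit
            exponent >>= 1
        return result

    assert mod_exp(5, 117, 19) == pow(5, 117, 19)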
3. Private-Key Cryptosystems
Abstract
Section 3.1 presents some classical ciphers for which both plaintext and ciphertext are characters or strings of characters. Section 3.2 covers the theory of modern cryptosystems and describes two early ciphers: Lucifer and DES. Section 3.3 presents five private-key block ciphers: FEAL, IDEA, RC6, Rijndael, and Serpent, of which the last three took part in the recent AES competition. The Rijndael algorithm won the competition and is currently proposed as the encryption standard. Sections 3.4 and 3.5 introduce differential and linear cryptanalysis, respectively. Finally, Sect. 3.6 studies the principles of secure S-box design.
Josef Pieprzyk, Thomas Hardjono, Jennifer Seberry
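The classical ciphers of Sect. 3.1 operate directly on characters. As a hedged illustration (not taken from the book), the following Python sketch implements one of the simplest, a Caesar-style shift cipher over the alphabet A-Z; it is trivially breakable and serves only to show the plaintext-to-ciphertext mapping:

    def shift_encrypt(plaintext, key):
        """Encrypt A-Z text by shifting each letter `key` positions."""
        return "".join(
            chr((ord(c) - ord("A") + key) % 26 + ord("A"))
            for c in plaintext.upper() if c.isalpha()
        )

    def shift_decrypt(ciphertext, key):
        """Decryption is simply a shift by -key."""
        return shift_encrypt(ciphertext, -key)

    assert shift_decrypt(shift_encrypt("ATTACKATDAWN", 3), 3) == "ATTACKATDAWN"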
4. Public-Key Cryptosystems
Abstract
In 1976 Diffie and Hellman [152] described the framework for public-key cryptography. It was not until 1978 that three designs for public-key cryptosystems were published. Rivest, Shamir, and Adleman [431] showed how the factorization problem could be used to construct a public-key cryptosystem. This is the well-known RSA cryptosystem. Merkle and Hellman [339] used the knapsack problem in their construction. McEliece [329] built a system based on error-correcting codes. Later, in 1985, ElGamal [163] designed a public-key cryptosystem using the discrete logarithm problem. Koblitz [283] and Miller [346] suggested the use of elliptic curves in the design of public-key cryptosystems. Nowadays there are quite a few more suggestions as to how to design public-key cryptosystems, but none is as popular as the RSA and ElGamal cryptosystems.
Josef Pieprzyk, Thomas Hardjono, Jennifer Seberry
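A minimal worked sketch of textbook RSA, using deliberately tiny primes so every number can be checked by hand (an illustration of the scheme's structure, not a usable implementation; pow(e, -1, phi) requires Python 3.8+):

    # Toy RSA with tiny primes -- illustrative only, wildly insecure.
    p, q = 61, 53
    n = p * q                      # public modulus, 3233
    phi = (p - 1) * (q - 1)        # Euler's totient, 3120
    e = 17                         # public exponent, coprime to phi
    d = pow(e, -1, phi)            # private exponent: e*d = 1 (mod phi)

    m = 65                         # message encoded as an integer < n
    c = pow(m, e, n)               # encrypt with the public key (e, n)
    assert pow(c, d, n) == m       # decrypt with the private key (d, n)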
5. Pseudorandomness
Abstract
Most physical processes exhibit some random behaviour. A good example of such random behaviour is noise in telecommunication channels. Ironically, when a source of random bits or numbers is needed, this ever-present randomness is in short supply. Generation of a large volume of random bits is usually very expensive and requires special hardware. Also, the parameters of truly random generators can fluctuate, so they need to be calibrated and tested from time to time. The major drawback of truly random generators is the lack of reproducibility of the yielded bits and numbers. Reproducibility is crucial in simulations where there is a need to repeat the same experiments many times. It is also necessary in some cryptographic applications when, for instance, two communicating parties want to generate identical sequences from a shared secret (and short) key. From a cryptographic point of view, we are interested in deterministic algorithms that efficiently generate strings of bits and that cannot be distinguished from truly random ones. Readers interested in the subject are referred to Goldreich [205].
Josef Pieprzyk, Thomas Hardjono, Jennifer Seberry
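The reproducibility point can be made concrete with a toy deterministic generator. The sketch below expands a short seed by hashing a counter (a counter-mode construction assumed here for illustration; it is not a vetted DRBG and should not be used in practice):

    import hashlib

    def prg(seed: bytes, nbytes: int) -> bytes:
        """Toy deterministic generator: expand a short seed by hashing
        an incrementing counter. Same seed -> same output stream."""
        out = b""
        counter = 0
        while len(out) < nbytes:
            out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:nbytes]

    # Two parties sharing the short seed derive identical "random" streams,
    # which is exactly the reproducibility truly random sources lack.
    assert prg(b"shared secret", 32) == prg(b"shared secret", 32)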
6. Hashing
Abstract
In many cryptographic applications, it is necessary to produce a relatively short fingerprint of a much longer message or electronic document. The fingerprint is also called a digest of the message. Cryptographic applications of hashing include, among others, the generation of digital signatures.
Josef Pieprzyk, Thomas Hardjono, Jennifer Seberry
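A short illustration of the fingerprint idea using Python's standard library (SHA-256 is used here as a convenient modern example; the book predates it as a standard choice):

    import hashlib

    message = b"a much longer message or electronic document ..."
    digest = hashlib.sha256(message).hexdigest()

    # The digest has a fixed length (256 bits) regardless of message size,
    # and any change to the message changes it, with overwhelming
    # probability -- which is what makes it usable as a fingerprint.
    assert hashlib.sha256(b"tampered message").hexdigest() != digest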
7. Digital Signatures
Abstract
Digital signatures should be, in a sense, similar to hand-written ones. Unlike a paper document, however, an electronic document is not tied to any particular storage medium, so it can easily be copied, transmitted, and manipulated. Digital signatures therefore have to create some sort of digital “encapsulation” of the document, so that any interference with either its contents or the signature will be detected with a very high probability. Typically, a signed document must be verifiable by anyone using some publicly accessible information.
Josef Pieprzyk, Thomas Hardjono, Jennifer Seberry
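A hash-then-sign sketch built on the toy RSA key from the Chap. 4 illustration above (all values are illustrative; real signature schemes add padding and much larger parameters):

    import hashlib

    # Toy RSA key: n = 61*53, e = 17, d = e^-1 mod 3120 = 2753.
    n, e, d = 3233, 17, 2753

    def sign(document: bytes) -> int:
        """Hash-then-sign: bind the signature to the document's digest."""
        h = int.from_bytes(hashlib.sha256(document).digest(), "big") % n
        return pow(h, d, n)                  # uses the private exponent

    def verify(document: bytes, signature: int) -> bool:
        """Anyone holding the public key (e, n) can check the signature."""
        h = int.from_bytes(hashlib.sha256(document).digest(), "big") % n
        return pow(signature, e, n) == h

    doc = b"I agree to the terms."
    sig = sign(doc)
    assert verify(doc, sig)
    # A modified document fails verification (barring a hash collision
    # modulo the toy-sized n):
    print(verify(b"I agree to different terms.", sig))   # almost surely False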
8. Authentication
Abstract
Authentication is one of the basic cryptographic techniques. Its aim is to provide a receiver with some kind of proof that the information comes from the intended sender. In this chapter we are going to discuss authentication whose security is unconditional, i.e., its security is independent of the computational power of a potential attacker. Simmons wrote a good review on the subject in [475]. Stinson treated authentication in Chap. 10 of his book [496].
Josef Pieprzyk, Thomas Hardjono, Jennifer Seberry
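Unconditional security can be illustrated with a classic one-time authentication code over a prime field (a standard textbook construction, sketched here with assumed parameter choices): the tag is an affine function of the message, and with a fresh uniformly random key per message a forger succeeds with probability 1/p no matter how much computing power is available.

    import secrets

    p = (1 << 127) - 1                  # a Mersenne prime field modulus
    a = secrets.randbelow(p - 1) + 1    # secret key part a, nonzero
    b = secrets.randbelow(p)            # secret key part b

    def tag(message: int) -> int:
        """One-time tag: (a*m + b) mod p; (a, b) must be used once."""
        return (a * message + b) % p

    m = 42
    t = tag(m)
    assert t == (a * m + b) % p         # receiver recomputes and compares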
9. Secret Sharing
Abstract
Secret sharing becomes indispensable whenever secret information needs to be kept collectively by a group of participants in such a way that only a qualified subgroup is able to reconstruct the secret. An example of such a scheme is (t, n) threshold secret sharing, in which there are n participants holding their shares of the secret, and every t (t ≤ n) participants can collectively recreate the secret while any (t − 1) participants cannot get any information about the secret. The need for secret sharing arises if the storage system is not reliable, so there is a high likelihood that some pieces of information will be lost. Secret sharing is also useful if the owner of the secret does not trust any single person. Instead, the owner can deposit the secret with a group so that only a sufficiently large subgroup of members can reconstruct the secret. Threshold schemes were independently proposed by Blakley [41] and Shamir [464].
Josef Pieprzyk, Thomas Hardjono, Jennifer Seberry
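Shamir's construction [464] is short enough to sketch in full: the secret is hidden as the constant term of a random degree-(t − 1) polynomial over a prime field, so any t shares determine it by interpolation while any t − 1 reveal nothing (field size and parameters below are illustrative; pow(x, -1, p) requires Python 3.8+):

    import secrets

    p = 2**61 - 1   # prime field modulus (a Mersenne prime)

    def share(secret, t, n):
        """Deal n points of a random degree-(t-1) polynomial f with f(0)=secret."""
        coeffs = [secret] + [secrets.randbelow(p) for _ in range(t - 1)]
        def f(x):
            return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x = 0 recovers the constant term."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % p
                    den = den * (xi - xj) % p
            secret = (secret + yi * num * pow(den, -1, p)) % p
        return secret

    shares = share(123456789, t=3, n=5)
    assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 suffice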
10. Group-Oriented Cryptography
Abstract
It may be required that the power to execute some operations be shared among the members of a group. The recognition of such needs came when NIST tried to introduce the controversial Clipper Chip [368] with key escrowing to achieve legal wiretapping. The proposed escrowed encryption algorithm used two parties (called Key Escrow Agencies) to deposit the valid cryptographic key. Only if the two parties pooled their partial keys could the ciphertext be decrypted.
Josef Pieprzyk, Thomas Hardjono, Jennifer Seberry
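The two-agency escrow idea in miniature: split the session key so that neither deposit alone carries any information, and only the XOR of both recovers it (an illustration of the splitting principle only, not the actual Clipper/EES construction):

    import secrets

    session_key = secrets.token_bytes(16)
    deposit_1 = secrets.token_bytes(16)   # agency 1's share: uniform, independent
    deposit_2 = bytes(a ^ b for a, b in zip(session_key, deposit_1))  # agency 2's

    # Either deposit alone is statistically independent of the key;
    # pooled together they reveal it exactly.
    recovered = bytes(a ^ b for a, b in zip(deposit_1, deposit_2))
    assert recovered == session_key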
11. Key Establishment Protocols
Abstract
So far we have tacitly assumed that all cryptographic algorithms can be readily used, with a suitable collection of secret and public keys already distributed and known to the parties. For instance, secrecy systems based on secret-key encryption require the same key to be shared by both the sender and the receiver. In this chapter we focus our attention on how the keys needed to enable cryptographic protection can be exchanged among the parties.
Josef Pieprzyk, Thomas Hardjono, Jennifer Seberry
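The canonical example is the Diffie-Hellman exchange [152], sketched here with a toy 64-bit modulus (real deployments use groups of 2048+ bits or elliptic curves; the base g is not checked to be a generator, which is fine for illustration):

    import secrets

    p = 0xFFFFFFFFFFFFFFC5   # 2**64 - 59, a prime; toy-sized for illustration
    g = 5                    # public base

    a = secrets.randbelow(p - 2) + 1     # Alice's secret exponent
    b = secrets.randbelow(p - 2) + 1     # Bob's secret exponent

    A = pow(g, a, p)    # Alice sends A to Bob over the open channel
    B = pow(g, b, p)    # Bob sends B to Alice

    # Both ends now hold g**(a*b) mod p without it ever being transmitted.
    assert pow(B, a, p) == pow(A, b, p)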
12. Zero-Knowledge Proof Systems
Abstract
Zero-knowledge (also called minimum-disclosure) proof systems are indispensable wherever there is a necessity to prove the truth of a statement without revealing anything more about it. Zero-knowledge proofs involve two parties: the prover, who claims that a statement is true, and the verifier, who would like to be convinced that the statement is indeed true. The proof is conducted via an interaction between the parties. At the end of the protocol, the verifier is convinced only when the statement is true. If, however, the prover lies about the statement, the verifier will discover the lie with an overwhelming probability. The idea sprang out of interactive proof systems. Interactive proofs have gained quite an independent status as a part of computational complexity theory.
Josef Pieprzyk, Thomas Hardjono, Jennifer Seberry
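One round of a Fiat-Shamir-style protocol makes the interaction concrete: the prover shows knowledge of a square root s of a public value v modulo n without revealing s. Each round a cheating prover survives with probability 1/2, so k rounds reduce the cheating chance to 2^-k. (Toy parameters below, reusing the illustrative modulus from the Chap. 4 sketch.)

    import secrets

    n = 3233                      # toy modulus
    s = 123                       # prover's secret
    v = pow(s, 2, n)              # public value: v = s^2 mod n

    r = secrets.randbelow(n - 1) + 1
    x = pow(r, 2, n)              # prover commits to r^2

    challenge = secrets.randbelow(2)          # verifier flips a coin
    y = r if challenge == 0 else (r * s) % n  # prover's response

    # Verifier's check: y^2 must equal x or x*v, depending on the coin.
    # The transcript reveals nothing about s beyond the truth of the claim.
    assert pow(y, 2, n) == (x if challenge == 0 else (x * v) % n)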
13. Identification
Abstract
Identification is usually one of the first safeguards used to protect computer resources against unauthorized access. Any access control that governs how the computer resources are accessed and by whom assumes that there is an identification mechanism that works reliably.
Josef Pieprzyk, Thomas Hardjono, Jennifer Seberry
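The most familiar instance is password-based identification. A minimal sketch, assuming a salted-hash verifier (production systems use deliberately slow key-derivation functions such as PBKDF2 or bcrypt rather than a single hash):

    import hashlib, hmac, secrets

    def enroll(password: str):
        """Store only a per-user salt and the salted hash, never the password."""
        salt = secrets.token_bytes(16)
        return salt, hashlib.sha256(salt + password.encode()).digest()

    def identify(password: str, salt: bytes, stored: bytes) -> bool:
        candidate = hashlib.sha256(salt + password.encode()).digest()
        return hmac.compare_digest(candidate, stored)   # constant-time compare

    salt, stored = enroll("correct horse battery staple")
    assert identify("correct horse battery staple", salt, stored)
    assert not identify("wrong guess", salt, stored)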
14. Intrusion Detection
Abstract
Distributed systems emerged as a consequence of rapid progress in both computing and communication technology. A distributed system combines all computing resources into one “super” computer in which the underlying network provides the necessary communication facilities. The main advantage and, ironically, the major problem of a distributed system is its openness. The openness of the system permits sharing of all resources among users independently of their locations. At the same time, a distributed system is much more vulnerable to a potential attacker due to its distributed nature. The communication network is typically too large to even attempt to protect it via physical means. Widely used cryptographic methods may either detect illegal activity or render the transmitted data unintelligible to an attacker. Some channels, owing to their characteristics, are subject to specific attacks. For example, all broadcasting channels used for mobile and satellite communication are inherently vulnerable to eavesdropping. An attacker may be aware of some weaknesses in the security safeguards and exploit them to compromise a part of, or the whole, system. In general, the designers of the security safeguards try to prevent any illegal user from accessing the system. Numerous examples have shown that even the best protection mechanisms may fail because there is a flaw in the design or, more often, because the mechanism was not designed to withstand some “exotic,” yet practical, attacks. So if the safeguards fail, should we succumb and do nothing?
Josef Pieprzyk, Thomas Hardjono, Jennifer Seberry
15. Electronic Elections and Digital Money
Abstract
Electronic banking, commerce, and elections are examples of services that are already accessible, or will be in the near future, on the Internet. Without exaggeration, one can say that most services that require face-to-face contact will be replaced by their network versions with remote interaction between a client and the parties involved in the service. A distributed system provides the medium for interaction. By its nature, the distributed system allows us to perform the requested services (banking, voting, etc.) by exchange of information only. Needless to say, all stages of a service must be converted into protocols, each of which achieves a well-defined goal (such as client-server mutual identification, establishing a secure communication channel, verification of a client request, etc.). Network services can thus be seen as a collection of elementary protocols executed by the parties in order to provide the well-defined service to the client(s).
Josef Pieprzyk, Thomas Hardjono, Jennifer Seberry
16. Database Protection and Security
Abstract
Database management systems are an important part of the operations of most computerized organizations. In many instances the data held within a database carry more value than the hardware and software used to manage and maintain the data. Consequently the privacy and security of data stored within database systems represents a major concern for organizations relying heavily on database management systems.
Josef Pieprzyk, Thomas Hardjono, Jennifer Seberry
17. Access Control
Abstract
A computing environment can be seen as a collection of resources which are shared by user processes under the watchful eye of the operating system. The collection typically includes hardware resources (the CPU, the main memory, disk space, I/O devices, etc.) and software resources (editors, compilers, debugging tools, etc.). Sharing of resources can take on different forms, and each form of sharing requires a different degree of operating system attention or control. For example, resources such as printers may be accessed by every process as long as the operating system puts the interested processes in a queue so they can access the printer sequentially in some order. An editor can be accessed concurrently by many processes as long as no process modifies it. Normally, personal data files can be accessed by their owners only. The main task of the operating system (OS) is to control access to system resources. The classification of computer entities into resources (passive) and processes (active) is not disjoint, as a process can also be a resource to which another process would like to have access. In the access control vocabulary, passive entities or resources are called objects, and active entities or processes are called subjects.
Josef Pieprzyk, Thomas Hardjono, Jennifer Seberry
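The subject/object view translates directly into the classical access control matrix. A minimal sketch (subjects, objects, and rights below are hypothetical examples):

    # Access control matrix: (subject, object) -> set of permitted rights.
    matrix = {
        ("alice", "payroll.db"): {"read", "write"},
        ("bob",   "payroll.db"): {"read"},
        ("bob",   "printer"):    {"append"},
    }

    def access_allowed(subject: str, obj: str, right: str) -> bool:
        """The OS reference monitor consults the matrix on every access;
        absent entries default to no rights."""
        return right in matrix.get((subject, obj), set())

    assert access_allowed("alice", "payroll.db", "write")
    assert not access_allowed("bob", "payroll.db", "write")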
18. Network Security
Abstract
The invention of the telephone by Alexander Bell could be regarded as the starting point of the communication revolution. Very soon after this, the global telephone network made the world the “global village” for the first time in human history. The telephone network also provided the communication infrastructure for computer centers in the late 1970s and 1980s. For example, the first electronic funds transfer systems were built using the telephone network.
Josef Pieprzyk, Thomas Hardjono, Jennifer Seberry
Backmatter
Metadata
Title
Fundamentals of Computer Security
Authors
Prof. Dr. Josef Pieprzyk
Dr. Thomas Hardjono
Prof. Dr. Jennifer Seberry
Copyright Year
2003
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-662-07324-7
Print ISBN
978-3-642-07713-5
DOI
https://doi.org/10.1007/978-3-662-07324-7