
About this book

It is gratifying that this textbook is still sufficiently popular to warrant a third edition. I have used the opportunity to improve and enlarge the book. When the second edition was prepared, only two pages on algebraic geometry codes were added. These have now been removed and replaced by a relatively long chapter on this subject. Although it is still only an introduction, the chapter requires more mathematical background of the reader than the remainder of this book. One of the very interesting recent developments concerns binary codes defined by using codes over the alphabet ℤ4. There is so much interest in this area that a chapter on the essentials was added. Knowledge of this chapter will allow the reader to study recent literature on ℤ4-codes. Furthermore, some material has been added that appeared in my Springer Lecture Notes 201, but was not included in earlier editions of this book, e.g. Generalized Reed-Solomon Codes and Generalized Reed-Muller Codes. In Chapter 2, a section on “Coding Gain” (the engineer's justification for using error-correcting codes) was added. For the author, preparing this third edition was a most welcome return to mathematics after seven years of administration. For valuable discussions on the new material, I thank C. P. I. M. Baggen, I. M. Duursma, H. D. L. Hollmann, H. C. A. van Tilborg, and R. M. Wilson. A special word of thanks to R. A. Pellikaan for his assistance with Chapter 10.

Table of contents

Frontmatter

Chapter 1. Mathematical Background

Abstract
In order to be able to read this book a fairly thorough mathematical background is necessary. In different chapters many different areas of mathematics play a rôle. The most important one is certainly algebra, but the reader must also know some facts from elementary number theory, probability theory and a number of concepts from combinatorial theory such as designs and geometries. In the following sections we shall give a brief survey of the prerequisite knowledge. Usually proofs will be omitted. For these we refer to standard textbooks. In some of the chapters we need a large number of facts concerning a not too well-known class of orthogonal polynomials, called Krawtchouk polynomials. These properties are treated in Section 1.2. The notations that we use are fairly standard. We mention a few that may not be generally known. If C is a finite set we denote the number of elements of C by ∣C∣. If the expression B is the definition of concept A then we write A := B. We use “iff” for “if and only if”. An identity matrix is denoted by I and the matrix with all entries equal to one is J. Similarly we abbreviate the vector with all coordinates 0 (resp. 1) by 0 (resp. 1). Instead of using [x] we write ⌊x⌋ := max{n ∈ ℤ ∣ n ≤ x} and we use the symbol ⌈x⌉ for rounding upwards.
J. H. van Lint
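The floor and ceiling notation defined above can be checked directly against Python's standard `math` module; a minimal sketch (note in particular that for negative x the floor is not truncation):

```python
import math

# Floor of x: the largest integer n with n <= x; ceiling rounds upwards.
assert math.floor(3.7) == 3
assert math.floor(-1.5) == -2   # max{n in Z : n <= -1.5} is -2, not -1
assert math.ceil(3.2) == 4
print("floor/ceiling checks passed")
```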

Chapter 2. Shannon’s Theorem

Abstract
This book will present an introduction to the mathematical aspects of the theory of error-correcting codes. This theory is applied in many situations which have as a common feature that information coming from some source is transmitted over a noisy communication channel to a receiver. Examples are telephone conversations, storage devices like magnetic tape units which feed some stored information to the computer, telegraph, etc. The following is a typical recent example. Many readers will have seen the excellent pictures which were taken of Mars, Saturn and other planets by satellites such as the Mariners, Voyagers, etc. In order to transmit these pictures to Earth a fine grid is placed on the picture and for each square of the grid the degree of blackness is measured, say on a scale of 0 to 63. These numbers are expressed in the binary system, i.e. each square produces a string of six 0s and 1s. The 0s and 1s are transmitted as two different signals to the receiver station on Earth (the Jet Propulsion Laboratory of the California Institute of Technology in Pasadena). On arrival the signal is very weak and it must be amplified. Due to the effect of thermal noise it happens occasionally that a signal which was transmitted as a 0 is interpreted by the receiver as a 1, and vice versa. If the 6-tuples of 0s and 1s that we mentioned above were transmitted as such, then the errors made by the receiver would have great effect on the pictures.
J. H. van Lint
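The pixel-encoding scheme described above is easy to simulate. The helper names below (`encode_pixel`, `flip_bit`) are hypothetical, chosen only for this illustration:

```python
def encode_pixel(level: int) -> str:
    """Encode a grey level in 0..63 as a string of six 0s and 1s."""
    if not 0 <= level <= 63:
        raise ValueError("grey level must be in 0..63")
    return format(level, "06b")

def flip_bit(bits: str, i: int) -> str:
    """Simulate thermal noise on the channel: flip the bit at position i."""
    b = list(bits)
    b[i] = "1" if b[i] == "0" else "0"
    return "".join(b)

sent = encode_pixel(41)       # '101001'
received = flip_bit(sent, 0)  # the most significant bit is hit by noise
print(sent, "->", received, "=", int(received, 2))
```

A single channel error in the most significant bit changes the grey level by 32, which is why transmitting the 6-tuples uncoded would visibly damage the pictures.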

Chapter 3. Linear Codes

Abstract
In this chapter we assume that information is coded using an alphabet Q with q distinct symbols. A code is called a block code if the coded information can be divided into blocks of n symbols which can be decoded independently. These blocks are the codewords and n is called the block length or word length (or just length). The examples in Chapter 2 were all block codes. In Chapter 13 we shall briefly discuss a completely different system, called convolutional coding, where an infinite sequence of information symbols \( i_0, i_1, i_2, \ldots \) is coded into an infinite sequence of message symbols. For example, for rate \( \frac{1}{2} \) one could have \( i_0, i_1, i_2, \ldots \to i_0, i_0, i_1, i_1, \ldots \), where the nth message symbol is a function of \( i_0, i_1, \ldots, i_n \). For block codes we generalize (2.1.3) to arbitrary alphabets.
J. H. van Lint
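The rate-\( \frac{1}{2} \) example above, in which every information symbol is simply sent twice, can be sketched for a finite prefix of the infinite sequence:

```python
def rate_half_repeat(info):
    """Map i0, i1, i2, ... to i0, i0, i1, i1, ...: two message symbols
    are produced per information symbol, so the rate is 1/2."""
    out = []
    for symbol in info:
        out.extend([symbol, symbol])
    return out

print(rate_half_repeat([1, 0, 1]))  # [1, 1, 0, 0, 1, 1]
```

Here each message symbol depends only on the current information symbol; in a genuine convolutional code it may depend on earlier information symbols as well.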

Chapter 4. Some Good Codes

Abstract
Let \( H_n \) be a Hadamard matrix of order n (see (1.3.5)). In \( H_n \) and \( -H_n \) we replace −1 by 0. In this way we find 2n rows which are words in \( \mathbb{F}_2^n \). Since any two rows of a Hadamard matrix differ in half of the positions we have constructed an \( (n, 2n, \frac{1}{2}n) \) code. For n = 8 this is an extended Hamming code. For n = 32 the code is the one used by Mariner 1969 which was mentioned in Section 2.1. In general these codes are called Hadamard codes.
J. H. van Lint
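For n a power of 2, a Hadamard matrix is supplied by Sylvester's doubling construction, and the \( (n, 2n, \frac{1}{2}n) \) parameters can then be verified by exhaustive distance computation. The sketch below treats only that special case; the construction above applies to any Hadamard matrix:

```python
from itertools import combinations

def sylvester_hadamard(n):
    """Hadamard matrix of order n (n a power of 2), built by doubling:
    H_{2m} = [[H_m, H_m], [H_m, -H_m]]."""
    H = [[1]]
    while len(H) < n:
        H = ([row + row for row in H] +
             [row + [-x for x in row] for row in H])
    return H

n = 8
H = sylvester_hadamard(n)
# Rows of H and -H with every -1 replaced by 0: 2n binary words.
code = [[(x + 1) // 2 for x in row] for row in H]
code += [[1 - x for x in word] for word in code[:n]]
dmin = min(sum(a != b for a, b in zip(u, v))
           for u, v in combinations(code, 2))
print(len(code), dmin)  # 16 words at minimum distance 4: an (8, 16, 4) code
```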

Chapter 5. Bounds on Codes

Abstract
In this chapter we shall be interested in codes that have as many codewords as possible, given their length and minimum distance. We shall not be interested in questions like usefulness in practice, encoding or decoding of such codes. We again consider as alphabet a set Q of q symbols and we define θ := (q – 1)/q. Notation is as in Section 3.1. We assume q has been chosen and then define an (n, *, d) code as a code with length n and minimum distance d. We are interested in the maximal number of codewords (i.e. the largest M which can be put in place of the *). An (n, M, d) code which is not contained in any (n, M + 1, d) code is called maximal.
J. H. van Lint
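Maximality (a code to which no word can be added without lowering the minimum distance) is easy to see in a greedy construction. The brute-force sketch below is illustrative only and is not a method from the book:

```python
from itertools import product

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def greedy_code(n, d):
    """Scan all binary words in lexicographic order and keep each word
    whose distance to every word kept so far is at least d.  By
    construction no further word can be added, so the result is a
    maximal (n, M, d) code (though M need not be the largest possible)."""
    code = []
    for w in product((0, 1), repeat=n):
        if all(hamming(w, c) >= d for c in code):
            code.append(w)
    return code

C = greedy_code(5, 3)
print(len(C))  # 4: here the greedy code happens to attain the maximum M
```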

Chapter 6. Cyclic Codes

Abstract
In Section 4.5 we defined the automorphism group Aut(C) of a code C. Corresponding to this group there is a group of permutation matrices. Sometimes the definition of Aut(C) is extended by replacing permutation matrices by monomial matrices, i.e. matrices for which the nonzero entries correspond to a permutation matrix. In both cases we are interested in the group of permutations. In this chapter we shall study linear codes for which the automorphism group contains the cyclic group of order n, where n is the word length.
J. H. van Lint
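As a concrete instance, the binary cyclic code of length 7 generated by g(x) = 1 + x + x³ (a [7, 4] Hamming code) can be generated and checked for invariance under the cyclic shift. This is an illustrative sketch, not a construction from the chapter:

```python
from itertools import product

def poly_mult_mod(a, g, n):
    """Multiply binary polynomials a(x) * g(x) modulo x^n - 1 over F_2;
    polynomials are lists of coefficients, lowest degree first."""
    out = [0] * n
    for i, ai in enumerate(a):
        if ai:
            for j, gj in enumerate(g):
                out[(i + j) % n] ^= gj
    return out

n = 7
g = [1, 1, 0, 1]  # g(x) = 1 + x + x^3, a divisor of x^7 - 1 over F_2
code = {tuple(poly_mult_mod(list(msg), g, n))
        for msg in product((0, 1), repeat=n - len(g) + 1)}
# A cyclic shift of the coordinates is multiplication by x modulo
# x^n - 1, so the cyclic group of order n is contained in Aut(C).
shifted = {word[-1:] + word[:-1] for word in code}
print(len(code), shifted == code)  # 16 True
```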

Chapter 7. Perfect Codes and Uniformly Packed Codes

Abstract
In this chapter we shall restrict ourselves to binary codes. This suffices to obtain insight into the methods and theorems of this part of coding theory. Nearly everything can be done (with a little more work) for arbitrary fields F q . In the course of time many ways of studying perfect codes and related problems have been developed. The algebraic approach which will be discussed in the next section is perhaps the most elegant one. We start with a completely different method. We shall give an extremely elementary proof of a strong necessary condition for the existence of a binary perfect e-error-correcting code. The theorem was first proved by S. P. Lloyd (1957) (indeed for q = 2) using analytic methods. Since then it has been generalized by many authors (cf. [44]) but it is still referred to as Lloyd’s theorem. The proof in this section is due to D. M. Cvetković and J. H. van Lint (1977; cf. [17]).
J. H. van Lint

Chapter 8. Codes over ℤ4

Abstract
In 1994 it was shown (see [88], which we use as a guideline for this chapter) that several well-known good binary codes can be constructed by first constructing a code over the alphabet ℤ4 and then mapping the coordinates to \( \mathbb{Z}_2^2 \). We first study codes over ℤ4 in general.
J. H. van Lint
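The passage from ℤ4 to \( \mathbb{Z}_2^2 \) uses the Gray map 0 ↦ 00, 1 ↦ 01, 2 ↦ 11, 3 ↦ 10, under which Lee weight over ℤ4 becomes Hamming weight over ℤ2; a minimal sketch:

```python
# The Gray map Z4 -> Z2^2: neighbouring elements of Z4 map to pairs
# at Hamming distance 1.
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def gray_map(word):
    """Map a word over Z4 to a binary word of twice the length."""
    return tuple(bit for x in word for bit in GRAY[x])

def lee_weight(word):
    """Lee weight over Z4: each coordinate x contributes min(x, 4 - x)."""
    return sum(min(x, 4 - x) for x in word)

w = (1, 3, 2, 0)
print(gray_map(w))  # (0, 1, 1, 0, 1, 1, 0, 0)
assert lee_weight(w) == sum(gray_map(w))  # Lee weight = Hamming weight
```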

Chapter 9. Goppa Codes

Abstract
Consider once again the parity check matrix of a (narrow sense) BCH code as given in the proof of Theorem 6.6.2, i.e.
$$H = \left[ {\begin{array}{*{20}{c}} 1&\beta &{{\beta ^2}}& \cdots &{{\beta ^{n - 1}}} \\ 1&{{\beta ^2}}&{{\beta ^4}}& \cdots &{{\beta ^{2(n - 1)}}} \\ \vdots & \vdots & \vdots &{}& \vdots \\ 1&{{\beta ^{d - 1}}}&{{\beta ^{2(d - 1)}}}& \cdots &{{\beta ^{(d - 1)(n - 1)}}} \end{array}} \right] $$
where β is a primitive nth root of unity in \( \mathbb{F}_{q^m} \), and each entry is interpreted as a column vector of length m over \( \mathbb{F}_q \); the existence of β implies that \( n \mid (q^m - 1) \).
J. H. van Lint
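The matrix H can be computed numerically for a small case. The sketch below takes q = 2, m = 3, n = 7, d = 3 and represents \( \mathbb{F}_8 \) by the primitive polynomial x³ + x + 1; that choice of polynomial is an assumption of this example, not fixed by the text:

```python
def gf8_mult(a, b):
    """Multiply two elements of F_8, encoded as 3-bit integers, with
    arithmetic modulo the primitive polynomial x^3 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b1000:
            a ^= 0b1011  # reduce by x^3 + x + 1
        b >>= 1
    return r

def gf8_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf8_mult(r, a)
    return r

n, d, beta = 7, 3, 2  # beta = x is a primitive 7th root of unity in F_8
# Row i of H (i = 1, ..., d - 1) is 1, beta^i, beta^{2i}, ..., beta^{(n-1)i}.
H = [[gf8_pow(beta, i * j) for j in range(n)] for i in range(1, d)]
for row in H:
    print(row)
# Writing each entry as a column of m = 3 bits then gives a binary
# parity check matrix with m(d - 1) rows.
```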

Chapter 10. Algebraic Geometry Codes

Abstract
One of the major developments in the theory of error-correcting codes in the 80’s was the introduction of methods from algebraic geometry to construct good codes. The ideas are based on generalizations of the Goppa codes of the previous chapter. The algebraic geometry codes were also inspired by ideas of Goppa. In fact, the codes of Chapter 9 are now sometimes called “classical” Goppa codes, and those of this chapter “geometric” Goppa codes. We use the terminology “algebraic geometry” codes.
J. H. van Lint

Chapter 11. Asymptotically Good Algebraic Codes

Abstract
In the previous chapters we have described several constructions of codes. If we considered these codes from the point of view of Chapter 5, we would be in for a disappointment. The Hadamard codes of Section 4.1 have \( \delta = \frac{1}{2} \) and if n → ∞ their rate R tends to 0. For Hamming codes R tends to 1 but δ tends to 0. For BCH codes we also find δ → 0 if we fix the rate. For all examples of codes which we have treated, an explicitly defined sequence of these codes either has δ → 0 or R → 0.
J. H. van Lint
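The vanishing rate of the Hadamard codes is easy to tabulate; a small sketch:

```python
import math

# Hadamard codes are (n, 2n, n/2) codes, so delta = d/n = 1/2 for every
# n, while the rate R = log2(2n) / n tends to 0 as n grows.
for n in (8, 32, 128, 512):
    R = math.log2(2 * n) / n
    print(f"n = {n:4d}   delta = 0.5   R = {R:.4f}")
```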

Chapter 12. Arithmetic Codes

Abstract
In this chapter we shall give a brief introduction to codes which are used to check and correct arithmetic operations performed by a computer. Operations are now ordinary arithmetic and as a result the theory is quite different from the preceding chapters. However, there is in several places a similarity to the theory of cyclic codes. In some cases we shall leave the details of proofs to the reader. For further information on this area see the references mentioned in Section 12.4.
J. H. van Lint

Chapter 13. Convolutional Codes

Abstract
The codes which we consider in this chapter are quite different from those in previous chapters. They are not block codes, i.e., the words do not have constant length. Although there are analogies and connections to block codes, there is one big difference, namely that the mathematical theory of convolutional codes is not well developed. This is one of the reasons that mathematicians find it difficult to become interested in these codes.
J. H. van Lint

Backmatter
