
2020 | Book

A Course in Algebraic Error-Correcting Codes


About this Book

This textbook provides a rigorous mathematical perspective on error-correcting codes, starting with the basics and progressing through to the state-of-the-art. Algebraic, combinatorial, and geometric approaches to coding theory are adopted with the aim of highlighting how coding can have an important real-world impact. Because it carefully balances both theory and applications, this book will be an indispensable resource for readers seeking a timely treatment of error-correcting codes.

Early chapters cover fundamental concepts, introducing Shannon’s theorem, asymptotically good codes and linear codes. The book then goes on to cover other types of codes, including chapters on cyclic codes, maximum distance separable codes, LDPC codes, and p-adic codes, amongst others. Those undertaking independent study will appreciate the helpful exercises with selected solutions.

A Course in Algebraic Error-Correcting Codes suits an interdisciplinary audience at the Masters level, including students of mathematics, engineering, physics, and computer science. Advanced undergraduates will find this a useful resource as well. An understanding of linear algebra is assumed.

Table of Contents

Frontmatter
Chapter 1. Shannon’s Theorem
Abstract
The content of this chapter is rather different in nature to what appears in the rest of the book, since Shannon’s theorem is really a theorem from information theory and not coding theory. However, we include it here because it tells us that reliable communication can be achieved using a noisy channel, and it sets a limit for what is feasible in terms of the proportion of data we can send whilst being almost sure to be able to recover the original message from the distorted signal. Essentially, this chapter is a very brief introduction to information theory; the mathematics it entails is probabilistic in nature, whereas later in the book it will be more algebraic and, to some extent, geometric. The chapter is not essential to the rest of the text and can be treated as optional.
Simeon Ball
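The feasibility limit this abstract refers to can be made concrete for the simplest noisy channel. A minimal sketch (the binary symmetric channel is our own choice of example; the chapter itself treats channels in general): the capacity 1 − H(p) bounds the achievable rate when each bit is flipped independently with probability p.

```python
import math

def binary_entropy(p: float) -> float:
    """H(p) = -p*log2(p) - (1-p)*log2(1-p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p: float) -> float:
    """Capacity of the binary symmetric channel with crossover probability p."""
    return 1.0 - binary_entropy(p)

# A channel that flips each bit with probability 0.11 still supports
# reliable communication at any rate below roughly 0.5 bits per use:
print(round(bsc_capacity(0.11), 3))
```

Note that a completely random channel (p = 0.5) has capacity zero, while a noiseless one (p = 0) has capacity one, matching the intuition the abstract describes.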
Chapter 2. Finite Fields
Abstract
This chapter is a brief introduction to finite fields. The most important facts that will be established are that a finite field necessarily contains p^h elements, for some prime number p and positive integer h, and that the field with p^h elements is unique, up to isomorphism. We will study how to factorise cyclotomic polynomials over finite fields, which is used in Chapter 5 to construct cyclic codes. The projective and affine spaces over a finite field are also introduced and will appear later in various chapters.
Simeon Ball
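As a small illustration of a field with p^h elements (our own toy, not taken from the book): GF(4) realised as binary polynomials modulo the irreducible x^2 + x + 1.

```python
# Arithmetic in GF(4) = F_2[x]/(x^2 + x + 1): the four elements 0, 1, x,
# x + 1 are encoded as integers 0..3, bit i holding the coefficient of x^i.
IRRED = 0b111  # x^2 + x + 1, irreducible over F_2

def gf4_add(a: int, b: int) -> int:
    return a ^ b  # coefficient-wise addition mod 2

def gf4_mul(a: int, b: int) -> int:
    # carry-less (polynomial) multiplication ...
    prod = 0
    for i in range(2):
        if (b >> i) & 1:
            prod ^= a << i
    # ... followed by reduction modulo x^2 + x + 1
    for shift in (1, 0):
        if prod & (1 << (2 + shift)):
            prod ^= IRRED << shift
    return prod

# x * x = x + 1, and every element satisfies a^4 = a (the identity
# a^(p^h) = a that characterises elements of the field):
print(gf4_mul(2, 2),
      all(gf4_mul(gf4_mul(gf4_mul(a, a), a), a) == a for a in range(4)))
# → 3 True
```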
Chapter 3. Block Codes
Abstract
The main parameters of an error-correcting block code, which we will often refer to simply as a code, are its length and minimum distance. In this chapter, we shall primarily be concerned with the relationship between the size of the code and these parameters. If we fix the length of the code, then we wish to maximise both the minimum distance and the size of the code, which are contrary aims. If we fix the minimum distance too, then we simply consider the problem of maximising the size of the code. We shall prove the Gilbert–Varshamov lower bound, which is obtained by constructing block codes of a given length and minimum distance using the greedy algorithm. We will prove various upper bounds which put limits on just how good a block code one can hope to find of a fixed length and minimum distance. Since Shannon’s theorem is an asymptotic result, telling us what rates we can achieve with a code of arbitrarily long length, for a large part of this chapter we shall focus on sequences of codes whose length tends to infinity. If we use nearest neighbour decoding, then, so that the probability of decoding correctly does not tend to zero, we will be interested in finding sequences of codes for which both the transmission rate and the ratio of the minimum distance to the length are bounded away from zero. We set aside the question of how these codes are implemented until later chapters, in which we work with codes that have more structure.
Simeon Ball
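The greedy construction behind the Gilbert–Varshamov bound can be sketched in a few lines; the parameters n = 7, d = 3 below are our own toy choice.

```python
from itertools import product

def hamming_distance(u, v):
    return sum(a != b for a, b in zip(u, v))

def greedy_code(n, d, q=2):
    """Greedily scan all q^n words in lexicographic order, keeping each
    word whose distance to every word kept so far is at least d."""
    code = []
    for word in product(range(q), repeat=n):
        if all(hamming_distance(word, c) >= d for c in code):
            code.append(word)
    return code

# The Gilbert–Varshamov argument guarantees at least q^n / V_q(n, d-1)
# codewords; for n = 7, d = 3 that is 128/29, i.e. at least 5.
print(len(greedy_code(7, 3)))
```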
Chapter 4. Linear Codes
Abstract
There is a lack of structure in the block codes we have considered in the first few chapters. Either we chose the code entirely at random, as in the proof of Theorem 1.12, or we built the code using the greedy algorithm, as in the proof of the Gilbert–Varshamov bound, Theorem 3.7. In this chapter, we introduce some algebraic structure to the block codes by restricting our attention to linear codes, codes whose codewords are the vectors of a subspace of a vector space over a finite field. Linear codes have the immediate advantage of being fast to encode. We shall also consider a decoding algorithm for this broad class of block codes. We shall prove the Griesmer bound, a bound which applies only to linear codes, and show how certain linear codes can be used to construct combinatorial designs.
Simeon Ball
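To make the fast encoding of a linear code concrete, here is a hedged sketch using the binary [7,4,3] Hamming code, a standard example (the particular systematic generator matrix G = [I | A] is our assumption): encoding a 4-bit message is just a vector-matrix product over F_2.

```python
import itertools

# An (assumed) systematic generator matrix for the binary [7,4,3] Hamming code.
G = [
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(msg):
    # message times G, coordinate by coordinate, over F_2
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

codewords = [encode(m) for m in itertools.product((0, 1), repeat=4)]
# For a linear code the minimum distance equals the minimum non-zero weight:
dmin = min(sum(c) for c in codewords if any(c))
print(len(codewords), dmin)  # → 16 3
```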
Chapter 5. Cyclic Codes
Abstract
Although it will turn out that cyclic codes are not asymptotically good codes, they are an important class of codes which include many useful and widely implemented short length codes, most notably the Golay codes and the general class of BCH codes. BCH codes have a prescribed minimum distance which means that, by construction, we can bound from below the minimum distance and therefore guarantee some error-correction properties. Cyclic codes also provide examples of linear codes with few weights, which allows us to construct designs via Theorem 4.22. The cyclic structure of these codes will appear again in Chapter 10, when we consider p-adic codes.
Simeon Ball
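The defining property of a cyclic code, that the set of codewords is closed under cyclic shifts, can be checked directly on a small example; the [7,4] code generated by g(x) = 1 + x + x^3, a factor of x^7 − 1 over F_2, is our own choice of illustration.

```python
from itertools import product

# The binary [7,4] cyclic code generated by g(x) = 1 + x + x^3, a factor
# of x^7 - 1 over F_2 (coefficients stored lowest degree first).
n, g = 7, [1, 1, 0, 1]

def mul_mod(m):
    """Codeword m(x) * g(x) mod (x^7 - 1), all arithmetic over F_2."""
    c = [0] * n
    for i, mi in enumerate(m):
        for j, gj in enumerate(g):
            c[(i + j) % n] ^= mi & gj
    return tuple(c)

code = {mul_mod(m) for m in product((0, 1), repeat=4)}
# Closure under the cyclic shift is exactly what makes the code cyclic:
shifted = {c[-1:] + c[:-1] for c in code}
print(len(code), shifted == code)  # → 16 True
```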
Chapter 6. Maximum Distance Separable Codes
Abstract
Two codewords of a block code of length n and minimum distance d must differ on any set of n − d + 1 coordinates, since they are at distance at least d from each other. This observation leads to the Singleton bound, Theorem 6.1. A code whose parameters give an equality in the Singleton bound is called a maximum distance separable code or simply an MDS code. Therefore, an MDS code is a block code in which every possible (n − d + 1)-tuple of elements of the alphabet occurs in a unique codeword for any set of n − d + 1 coordinates. The focus in this chapter will be on linear MDS codes, since much less is known about non-linear MDS codes, and there are no known non-linear MDS codes which outperform linear MDS codes.
Simeon Ball
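The MDS property described above, that any n − d + 1 coordinates determine the codeword uniquely, can be verified exhaustively on a toy Reed–Solomon code (the parameters below, a [5,2] code over F_7, are our own choice of example).

```python
from itertools import combinations, product

# A toy [5,2] Reed–Solomon code over F_7: codewords are evaluations of
# the polynomials a + b*x (degree < k = 2) at n = 5 distinct points.
q, n, k = 7, 5, 2
points = range(n)
code = [tuple((a + b * x) % q for x in points)
        for a, b in product(range(q), repeat=k)]

# Equality in the Singleton bound: d = n - k + 1 = 4 ...
dmin = min(sum(u != v for u, v in zip(c1, c2))
           for c1, c2 in combinations(code, 2))
# ... equivalently, the projection onto ANY k coordinates is a bijection,
# so every k-tuple of field elements occurs in exactly one codeword:
mds = all(len({tuple(c[i] for i in S) for c in code}) == q ** k
          for S in combinations(range(n), k))
print(dmin, mds)  # → 4 True
```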
Chapter 7. Alternant and Algebraic Geometric Codes
Abstract
Alternant codes are subfield subcodes of a generalised Reed–Solomon code over an extension field of \({\mathbb F}_q\). This is a large class of linear codes which includes BCH codes, one of the families of cyclic codes which appeared in Chapter 5. Although BCH codes are not asymptotically good, we will prove that there are asymptotically good alternant codes. Not only are alternant codes linear, and so easy to encode, they also have an algebraic structure which can be exploited in decoding algorithms. However, as with the codes constructed in Theorem 3.7, the construction of these asymptotically good alternant codes is probabilistic. We prove that such a code must exist without giving an explicit construction.
Simeon Ball
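The subfield-subcode operation itself is easy to illustrate. The toy below is our own and deliberately degenerate (a short Reed–Solomon code over GF(4) restricted to binary entries, which collapses to the repetition code); genuine alternant codes use generalised Reed–Solomon codes and much longer lengths.

```python
from itertools import product

# A tiny illustration of a subfield subcode: take a [4,2] Reed–Solomon
# code over GF(4) and keep the codewords whose entries all lie in F_2.
def mul(a, b):  # multiplication in GF(4) = F_2[x]/(x^2 + x + 1)
    p = 0
    for i in range(2):
        if (b >> i) & 1:
            p ^= a << i
    for s in (1, 0):
        if p & (1 << (2 + s)):
            p ^= 0b111 << s
    return p

rs = [tuple(a ^ mul(b, x) for x in range(4))  # a + b*x at each x in GF(4)
      for a, b in product(range(4), repeat=2)]
sub = [c for c in rs if all(v < 2 for v in c)]
print(sub)  # only the all-zero and all-one words survive: the repetition code
```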
Chapter 8. Low Density Parity Check Codes
Abstract
A linear code with a check matrix in which each column has few non-zero entries is called a low density parity check code or, for brevity, an LDPC code. These codes were introduced in the 1960s by Gallager who proved that probabilistic constructions of such matrices produce asymptotically good linear codes. Moreover, he observed that LDPC codes perform well when applying the following decoding algorithm. On receiving a vector v, one calculates the weight of the syndrome of v + e, for each vector e of weight one. If the weight of this syndrome is less than the weight of the syndrome of v, for some e, then we replace v by v + e and repeat the process. If at each iteration there is such a vector e, then, since after replacing v by v + e, the weight of the syndrome of v decreases, we will eventually find a vector whose syndrome is zero, which must be the syndrome of some codeword u. We then decode v as u. If at some iteration no such e exists then the decoding breaks down. If at some iteration more than one such vector e exists, then one could choose e so that the weight of the syndrome of v + e is minimised. In this chapter we will prove that there are LDPC codes, constructed from graphs with the expander property, for which the decoding algorithm will not break down. Provided that the number of error bits is less than half the minimum distance, the decoding algorithm will return the nearest codeword to the received vector. We will use probabilistic arguments to construct the graphs, and from these a sequence of codes which are asymptotically good.
Simeon Ball
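The decoding procedure described in this abstract can be sketched directly. For brevity the check matrix below is that of the small [7,4] Hamming code rather than a genuinely low-density one (our simplification); the algorithm is the syndrome-weight-decreasing iteration described above.

```python
# Repeatedly look for a weight-one error vector e whose addition reduces
# the weight of the syndrome; stop when the syndrome is zero, or report
# failure ("the decoding breaks down") when no such e exists.
H = [  # check matrix of the [7,4] Hamming code
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(v):
    return [sum(h * x for h, x in zip(row, v)) % 2 for row in H]

def decode(received):
    v = list(received)
    while any(syndrome(v)):
        best, best_w = None, sum(syndrome(v))
        for i in range(len(v)):  # try every weight-one error vector e
            w = sum(syndrome(v[:i] + [v[i] ^ 1] + v[i + 1:]))
            if w < best_w:
                best, best_w = i, w
        if best is None:
            return None  # decoding breaks down
        v[best] ^= 1  # replace v by v + e and repeat
    return v

print(decode([0, 0, 1, 0, 0, 0, 0]))  # → [0, 0, 0, 0, 0, 0, 0]
```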
Chapter 9. Reed–Muller and Kerdock Codes
Abstract
In Chapter 6, we studied Reed–Solomon codes, codes whose codewords are the evaluation of polynomials in one variable of degree at most k − 1 at the elements of \({\mathbb F}_q \cup \{\infty \}\). Reed–Solomon codes are short length codes, where the length n is bounded by q + 1, and only useful when we take the field to be large. The alternant codes which we constructed from generalised Reed–Solomon codes in Chapter 7 allowed us to construct codes over small fields and we put this to good use. In this chapter we will consider another generalisation of Reed–Solomon codes, codes whose codewords are the evaluation of polynomials in many variables. This again allows us to construct linear codes over small fields, and we will restrict our attention, for the most part, to binary linear codes. It will turn out that these codes are not asymptotically good. Nevertheless, they are an important class of codes which are widely implemented due to the availability of fast decoding algorithms. One example of such a decoding algorithm is the majority-logic decoding algorithm that we will study here. We will then go on to construct Kerdock codes, which are certain subcodes of the second-order Reed–Muller codes. These codes can give examples of non-linear codes with parameters for which no linear code exists.
Simeon Ball
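Evaluation codes of this kind are easy to write down. A sketch of the first-order Reed–Muller code RM(1, m), whose codewords are the truth tables of affine Boolean functions (the weight distribution printed at the end, which follows from every non-constant affine function being balanced, serves as a sanity check):

```python
from itertools import product

# RM(1, m): codewords are truth tables of a0 + a1*x1 + ... + am*xm over F_2,
# evaluated at all 2^m points of F_2^m.
def rm1(m):
    pts = list(product((0, 1), repeat=m))
    return {tuple((a0 + sum(a * x for a, x in zip(coef, p))) % 2 for p in pts)
            for a0 in (0, 1) for coef in product((0, 1), repeat=m)}

code = rm1(3)
weights = sorted({sum(c) for c in code})
# 2^(m+1) codewords of length 2^m; the only weights are 0, 2^(m-1), 2^m.
print(len(code), weights)  # → 16 [0, 4, 8]
```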
Chapter 10. p-Adic Codes
Abstract
The p-adic numbers were first considered by Hensel in the 19th century. He observed that the primes play a role in the integers analogous to that played by linear polynomials in \({\mathbb C}[X]\). The Laurent expansion of a rational function led him to consider the p-adic expansion of a rational number. In this chapter, for a fixed prime p, we will construct block codes over the rings \({\mathbb Z}/p^h{\mathbb Z}\) simultaneously, by constructing codes over the p-adic numbers and then considering the coordinates modulo p^h. These codes will be linear over the ring, but when mapped to codes over \({\mathbb Z}/p{\mathbb Z}\) they will result in codes which are not equivalent to linear codes. We start with a brief introduction to p-adic numbers, which will cover enough background for our purposes. The classical cyclic codes that we constructed in Chapter 5 lift to cyclic codes over the p-adic numbers. In the case of the cyclic Hamming code, this lift extends to a code over \({\mathbb Z}/4{\mathbb Z}\) which, when mapped to a binary code, gives a non-linear code with a set of parameters for which no linear code exists.
Simeon Ball
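The p-adic expansion of a rational number mentioned above can be computed digit by digit; a minimal sketch, assuming p does not divide the denominator:

```python
from fractions import Fraction

def p_adic_digits(r: Fraction, p: int, n: int) -> list:
    """First n digits of the p-adic expansion of r = a/b with p not
    dividing b: each digit d solves r ≡ d (mod p), then r := (r - d)/p."""
    digits = []
    for _ in range(n):
        # d = numerator * denominator^{-1} mod p
        d = r.numerator * pow(r.denominator, -1, p) % p
        digits.append(d)
        r = (r - d) / p
    return digits

# -1 = (p-1) + (p-1)p + (p-1)p^2 + ... in the p-adic integers; for p = 2:
print(p_adic_digits(Fraction(-1), 2, 5))  # → [1, 1, 1, 1, 1]
```

Unlike the Laurent expansion, the expansion proceeds in increasing powers of p and may be infinite even for integers, as the example of −1 shows.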
Backmatter
Metadata
Title
A Course in Algebraic Error-Correcting Codes
Written by
Simeon Ball
Copyright Year
2020
Electronic ISBN
978-3-030-41153-4
Print ISBN
978-3-030-41152-7
DOI
https://doi.org/10.1007/978-3-030-41153-4
