
Open Access | Book | 2017


Error-Correction Coding and Decoding

Bounds, Codes, Decoders, Analysis and Applications

Authors: Martin Tomlinson, Cen Jung Tjhai, Marcel A. Ambroze, Mohammed Ahmed, Mubarak Jibril

Publisher: Springer International Publishing

Book Series: Signals and Communication Technology


About this book

This book discusses both the theory and practical applications of self-correcting data, commonly known as error-correcting codes. The applications included demonstrate the importance of these codes in a wide range of everyday technologies, from smartphones to secure communications and transactions. Written in a readily understandable style, the book presents the authors’ twenty-five years of research organized into five parts:

Part I is concerned with the theoretical performance attainable by using error-correcting codes to achieve communications efficiency in digital communications systems.

Part II explores the construction of error-correcting codes and explains the different families of codes and how they are designed. Techniques are described for producing the very best codes.

Part III addresses the analysis of low-density parity-check (LDPC) codes, primarily the calculation of their stopping sets and low-weight codeword spectrum, which determine the performance of these codes.

Part IV deals with decoders designed to realize optimum performance.

Part V describes applications which include combined error correction and detection, public key cryptography using Goppa codes, correcting errors in passwords, and watermarking.

This book is a valuable resource for anyone interested in error-correcting codes and their applications, ranging from non-experts to professionals at the forefront of research in their field.

This book is open access under a CC BY 4.0 license.

Table of Contents

Frontmatter

Theoretical Performance of Error-Correcting Codes

Frontmatter

Chapter 1. Bounds on Error-Correction Coding Performance
Abstract
The theoretical performance of binary codes for the additive white Gaussian noise (AWGN) channel is discussed. In particular, Gallager’s coding theorem for binary codes is treated extensively. By assuming a binomial weight distribution for linear codes, it is shown that the decoder error probability performance of some of the best-known linear binary codes is the same as the average performance of the ensemble of all randomly chosen binary nonlinear codes having the same length and dimension. Assuming a binomial weight distribution, an upper bound is determined for the erasure performance of any code, and it is shown that this can be translated into an upper bound for code performance in the AWGN channel. Finally, theoretical bounds on the construction of error-correction codes are discussed.
Martin Tomlinson, Cen Jung Tjhai, Marcel A. Ambroze, Mohammed Ahmed, Mubarak Jibril
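
As a minimal illustration of the kind of calculation this chapter develops (a sketch only, not the book's own programs or Gallager's bound itself), the union bound on maximum likelihood decoding error probability for the AWGN channel can be evaluated with the binomial weight distribution \(A_w \approx 2^{k-n}\binom{n}{w}\). All code in this and the following sketches is illustrative Python.

```python
import math

def q_function(x: float) -> float:
    """Tail probability of the standard normal distribution."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def union_bound_awgn(n: int, k: int, ebno_db: float) -> float:
    """Union bound on the codeword error rate of an (n, k) binary code with
    a binomial weight distribution, BPSK signalling, AWGN channel."""
    rate = k / n
    ebno = 10.0 ** (ebno_db / 10.0)
    bound = 0.0
    for w in range(1, n + 1):
        a_w = math.comb(n, w) * 2.0 ** (k - n)   # expected multiplicity of weight w
        bound += a_w * q_function(math.sqrt(2.0 * w * rate * ebno))
    return min(bound, 1.0)

if __name__ == "__main__":
    for ebno_db in (2.0, 3.0, 4.0, 5.0):
        print(f"Eb/N0 = {ebno_db} dB: P_e <= {union_bound_awgn(128, 64, ebno_db):.3e}")
```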

Chapter 2. Soft and Hard Decision Decoding Performance
Abstract
In this chapter, we discuss the performance of codes under soft and hard decision decoding. Upper and lower bounds on hard and soft decision decoding are discussed. For hard decision decoding, evaluation of the performance of specific codes shows that full decoding produces better performance than the usual bounded distance decoder. An analysis of the upper and lower union bounds on the probability of error for maximum likelihood soft decision decoding shows that, in contrast to hard decision decoding, the two bounds coincide above relatively low values of SNR per information bit. The implication of this observation is that the soft decision decoding performance of a linear code may be determined from the number of minimum Hamming weight codewords alone, without the need to determine the full weight distribution of the code. Numerical performance comparisons are made for a wide range of different codes. It is shown that the binomial weight distribution provides a good indicative performance for codes whose weight distribution is difficult to obtain.
Martin Tomlinson, Cen Jung Tjhai, Marcel A. Ambroze, Mohammed Ahmed, Mubarak Jibril
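
For comparison with the bounded distance results discussed above, a hedged sketch: the block error probability of a bounded distance decoder, which fails whenever more than \(t = \lfloor (d-1)/2 \rfloor\) hard-decision channel errors occur in a block. The (127, 64, 21) BCH parameters are used purely as an example.

```python
import math

def q_function(x: float) -> float:
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bdd_error_probability(n: int, k: int, d: int, ebno_db: float) -> float:
    """P(block error) for an (n, k, d) code under bounded distance decoding
    of BPSK hard decisions on the AWGN channel."""
    t = (d - 1) // 2
    # bit error probability of the hard-decision BPSK channel
    p = q_function(math.sqrt(2.0 * (k / n) * 10.0 ** (ebno_db / 10.0)))
    return sum(math.comb(n, i) * p**i * (1.0 - p) ** (n - i)
               for i in range(t + 1, n + 1))

if __name__ == "__main__":
    # e.g. the (127, 64, 21) BCH code at 5 dB
    print(f"P_e ~ {bdd_error_probability(127, 64, 21, 5.0):.3e}")
```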

Chapter 3. Soft Decision and Quantised Soft Decision Decoding
Abstract
In many modern communication systems, the full benefits of error-correction coding are only realised if the decoder is able to utilise soft decision decoding. In this chapter, we give both approximate and exact bounds on the performance of soft decision decoding compared to hard decision decoding. The effects of soft decision quantisation are explored as a function of the number of quantisation levels. Examples are given of the different performance achieved using soft and quantised soft decision decoding for (100,50) and (200,100) binary codes. A modified version of the near maximum likelihood decoder, the Dorsch decoder described in Chap. 15, is presented for hard decision decoding. The performance improvements achieved by the decoder, compared to bounded distance decoding, are evaluated for some BCH codes.
Martin Tomlinson, Cen Jung Tjhai, Marcel A. Ambroze, Mohammed Ahmed, Mubarak Jibril

Code Construction

Frontmatter

Chapter 4. Cyclotomic Cosets, the Mattson–Solomon Polynomial, Idempotents and Cyclic Codes
Abstract
The important family of binary cyclic codes is explored in this chapter. Starting with cyclotomic cosets, the minimal polynomials are introduced. The Mattson–Solomon polynomial is described and it is shown to be an inverse discrete Fourier transform based on a primitive root of unity. The usefulness of the Mattson–Solomon polynomial in the design of cyclic codes is demonstrated. The relationship between idempotents and the Mattson–Solomon polynomial of a polynomial that has binary coefficients is also described. It is shown how binary cyclic codes may be easily derived from idempotents based on the cyclotomic cosets. It is demonstrated how useful this can be in the design of high-degree non-primitive binary cyclic codes. Several code examples using this construction method are presented. A table listing the complete set of the best binary cyclic codes, having the highest minimum Hamming distance, is included for all code lengths from 129 to 189 bits.
Martin Tomlinson, Cen Jung Tjhai, Marcel A. Ambroze, Mohammed Ahmed, Mubarak Jibril
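
The cyclotomic cosets that open the chapter are simple to compute. The following sketch (an illustration, not the book's code) partitions the exponents modulo n under repeated multiplication by 2.

```python
def cyclotomic_cosets(n: int, q: int = 2):
    """Return the cyclotomic cosets of q modulo n (n odd, gcd(n, q) = 1)."""
    remaining = set(range(n))
    cosets = []
    while remaining:
        s = min(remaining)          # smallest unused element seeds a coset
        coset, x = [], s
        while x not in coset:
            coset.append(x)
            x = (x * q) % n         # multiply by q until the cycle closes
        cosets.append(sorted(coset))
        remaining -= set(coset)
    return cosets

if __name__ == "__main__":
    for c in cyclotomic_cosets(15):
        print(c)
    # [0], [1, 2, 4, 8], [3, 6, 9, 12], [5, 10], [7, 11, 13, 14]
```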

Chapter 5. Good Binary Linear Codes
Abstract
Two of the important performance indicators for a linear code are the minimum Hamming distance and the weight distribution. This chapter is concerned with methods and tools for producing new codes which are improvements to currently known codes. The chapter discusses and describes several efficient algorithms for computing the minimum distance and weight distribution of linear codes. Several different methods of constructing codes are described and several new codes are presented which are improvements to the codes listed in the database of best-known codes. Methods of combining known codes to produce good codes are also explored in detail. These methods are applied using cyclic codes as components to the new codes and several new binary codes are found as a result.
Martin Tomlinson, Cen Jung Tjhai, Marcel A. Ambroze, Mohammed Ahmed, Mubarak Jibril
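
As a baseline for the efficient algorithms the chapter describes, here is the naive reference method (a sketch, practical for small k only): exhaustive enumeration of all \(2^k\) codewords from a generator matrix.

```python
from itertools import product

def weight_distribution(G):
    """G is a list of k generator rows, each a list of n bits (0/1)."""
    k, n = len(G), len(G[0])
    dist = [0] * (n + 1)
    for message in product((0, 1), repeat=k):
        codeword = [0] * n
        for i, bit in enumerate(message):
            if bit:
                codeword = [c ^ g for c, g in zip(codeword, G[i])]
        dist[sum(codeword)] += 1
    return dist

if __name__ == "__main__":
    # generator matrix of a (7, 4, 3) Hamming code
    G = [[1, 0, 0, 0, 1, 1, 0],
         [0, 1, 0, 0, 0, 1, 1],
         [0, 0, 1, 0, 1, 1, 1],
         [0, 0, 0, 1, 1, 0, 1]]
    dist = weight_distribution(G)
    d_min = next(w for w in range(1, len(dist)) if dist[w] > 0)
    print(dist, "d_min =", d_min)   # -> [1, 0, 0, 7, 7, 0, 0, 1] d_min = 3
```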

Chapter 6. Lagrange Codes
Abstract
Interpolation plays an important, mostly hidden role in algebraic coding theory. Reed–Solomon codes, BCH codes and Goppa codes may all be constructed via interpolation. It is shown that all of these codes form part of a large family of generalised MDS codes. Also in this chapter, we discuss the encoding of BCH and Goppa codes using classical Lagrange interpolation. It is shown in detail how Goppa codes are designed and constructed starting from first principles. The parity-check matrix of a BCH code is derived as a Goppa code, proving that BCH codes are a subset of Goppa codes. Following on from this, and using properties of the cyclotomic cosets, it is explained why the minimum Hamming distance of some BCH codes exceeds the BCH bound. It is shown how these outstanding BCH codes can be identified and constructed. A little-known paper by Goppa is discussed, and as a result it is shown how Goppa codes and BCH codes may be extended in length with additional parity-check bits, resulting in an increased minimum Hamming distance. Several examples are given of the technique, which results in some exceptional codes. Reed–Solomon codes are also explored as a means of constructing binary codes, resulting in some best-known codes.
Martin Tomlinson, Cen Jung Tjhai, Marcel A. Ambroze, Mohammed Ahmed, Mubarak Jibril
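
A sketch of the classical Lagrange interpolation underpinning these encodings, worked over a prime field GF(p) for simplicity (the chapter itself works over extension fields such as GF(2^m)). The (x, y) pairs behave like Reed–Solomon code symbols: any k of them recover the degree < k message polynomial.

```python
def poly_mul_linear(poly, c, p):
    """Multiply poly(x) (coefficients lowest first) by (x - c), modulo p."""
    out = [0] * (len(poly) + 1)
    for t, a in enumerate(poly):
        out[t] = (out[t] - c * a) % p      # the -c * a * x^t term
        out[t + 1] = (out[t + 1] + a) % p  # the a * x^(t+1) term
    return out

def lagrange_interpolate(points, p):
    """Coefficients of the unique degree < len(points) polynomial through
    the (x, y) pairs, with all arithmetic modulo the prime p."""
    coeffs = [0] * len(points)
    for i, (xi, yi) in enumerate(points):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                basis = poly_mul_linear(basis, xj, p)
                denom = denom * (xi - xj) % p
        scale = yi * pow(denom, p - 2, p) % p       # Fermat inverse of denom
        coeffs = [(c + scale * b) % p for c, b in zip(coeffs, basis)]
    return coeffs

if __name__ == "__main__":
    p = 13
    # message polynomial 3 + 5x + 2x^2 evaluated at x = 1..6: an RS-style codeword
    symbols = [(x, (3 + 5 * x + 2 * x * x) % p) for x in range(1, 7)]
    # any three of the six symbols recover the message polynomial
    print(lagrange_interpolate(symbols[2:5], p))    # -> [3, 5, 2]
```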

Chapter 7. Reed–Solomon Codes and Binary Transmission
Abstract
This chapter further discusses Reed–Solomon codes, which are exceptional symbol-based codes because they are maximum distance separable (MDS). Though these codes are not binary codes, they can be used as binary codes. The performance of Reed–Solomon codes when used on a binary channel is explored in this chapter and compared to codes which are designed for binary transmission. The construction of the parity-check matrices of RS codes for use as binary codes is described in detail for specific code examples. Simulation results for three differently constructed (150, 75) codes on the binary AWGN channel, using a near maximum likelihood decoder, are presented. Surprisingly, the best-performing code at a \(10^{-4}\) error rate is not the best-known (150, 75, 23) code. It is shown that this is due to the higher multiplicities of weight 32–36 codeword errors. However, beyond \(10^{-6}\) error rates the best-known (150, 75, 23) code is the best code.
Martin Tomlinson, Cen Jung Tjhai, Marcel A. Ambroze, Mohammed Ahmed, Mubarak Jibril

Chapter 8. Algebraic Geometry Codes
Abstract
This chapter introduces algebraic geometry (AG) codes. Algebraic geometry codes have good properties and some families of these codes have been shown to be asymptotically good. Concepts and notions relevant to AG codes are discussed with particular emphasis laid on the construction of AG codes. A generalised construction of AG codes is introduced and improvements to the tables of best-known codes are presented. This construction utilises the notion of places of higher degree of a curve as well as a new concatenation concept.
Martin Tomlinson, Cen Jung Tjhai, Marcel A. Ambroze, Mohammed Ahmed, Mubarak Jibril

Chapter 9. Algebraic Quasi Cyclic Codes
Abstract
In this chapter, self-dual and binary double-circulant codes based on primes are discussed. Many double-circulant codes are self-dual. Binary self-dual codes form an important class of codes due to their powerful error-correcting capabilities and their rich mathematical structure. This structure enables the entire weight distribution of a code to be determined. With these interesting properties, this family of codes has been a subject of extensive research for many years. For these codes that are longer than around 150 bits, an accurate determination of the codeword weight distributions has been an unsolved challenge. We show that the code structure may be used in a new algorithm that requires fewer codewords to be enumerated than traditional methods. We describe the new algorithm in detail and present new, complete weight distribution results for codes of length 152, 168, 192, 194 and 200. We describe a modular congruence method, show how it can be used to check the weight distributions of codes, and use it to correct mistakes in previously published results for codes of length 137 and 151. For evaluation of the minimum Hamming distance of very long codes, a new probabilistic method is presented along with results for codes up to 450 bits long. It is conjectured that the (136, 68, 24) self-dual code is the longest extremal code, meeting the upper bound for minimum Hamming distance, and that no other, longer, extremal code exists.
Martin Tomlinson, Cen Jung Tjhai, Marcel A. Ambroze, Mohammed Ahmed, Mubarak Jibril
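
The double-circulant structure discussed above is easy to exhibit. This sketch builds a rate-1/2 generator matrix [I | C] from a defining vector; the vector used is illustrative only, not one of the book's quadratic residue constructions.

```python
def circulant(first_row):
    """All cyclic shifts of first_row, as the rows of a square matrix."""
    m = len(first_row)
    return [first_row[-i:] + first_row[:-i] for i in range(m)]

def double_circulant_generator(c):
    """Generator matrix [I_m | C] of a rate-1/2 double-circulant code."""
    m = len(c)
    identity = [[1 if i == j else 0 for j in range(m)] for i in range(m)]
    C = circulant(c)
    return [identity[i] + C[i] for i in range(m)]

if __name__ == "__main__":
    G = double_circulant_generator([1, 1, 0, 1, 0, 0, 0])   # a (14, 7) code
    for row in G:
        print(row)
```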

Chapter 10. Historical Convolutional Codes as Tail-Biting Block Codes
Abstract
In this chapter, convolutional codes are explored from the point of view of their historical performance in comparison to the performance realised with modern, best decoding techniques. Convolutional codes are implemented as tail-biting block codes with near maximum likelihood decoding featuring an extended Dorsch decoder. It is shown that the convolutional codes designed for space applications and sequential decoding over 40 years ago are very good codes, comparable to the best codes known today. The performance realised back then was limited by the sequential decoder, as demonstrated by the presented results. An additional 2 dB of coding gain could have been realised using the modern, extended Dorsch decoder instead of the sequential decoder. However, back then this decoder had yet to be discovered and would probably have been too expensive for the technology available at the time. It is also shown that convolutional codes may be used as the basis for designing double-circulant block codes and vice versa. In particular, high, guaranteed values of \(d_{free}\) may be obtained by basing convolutional codes on outstanding double-circulant codes. A memory-78, non-systematic, half-rate convolutional code with a \(d_{free}\) of 32 is presented, based on the (200, 100, 32) extended quadratic residue, double-circulant code.
Martin Tomlinson, Cen Jung Tjhai, Marcel A. Ambroze, Mohammed Ahmed, Mubarak Jibril

Chapter 11. Analogue BCH Codes and Direct Reduced Echelon Parity Check Matrix Construction
Abstract
Many information sources are naturally analogue and must be digitised if they are to be transmitted digitally. The process of digitisation introduces quantisation errors and increases the bandwidth required. The use of analogue error-correcting codes eliminates the need for digitisation. It is shown that analogue BCH codes may be constructed in the same way as finite-field BCH codes, including Reed–Solomon codes, except that the field of complex numbers is used. It is shown how the Mattson–Solomon polynomial or, equivalently, the discrete Fourier transform may be used as the basis for the construction of analogue BCH codes. It is also shown that, using a primitive root of unity to construct the code as in discrete BCH codes, a permuted parity-check matrix produces an equivalent code. A new algorithm is presented which uses symbolwise multiplication of rows of a parity-check matrix to produce the parity-check matrix directly in reduced echelon form. The algorithm may be used for constructing reduced echelon parity-check matrices for standard BCH and RS codes as well as analogue BCH codes. Gaussian elimination or other means of solving parallel, simultaneous equations are completely avoided by the method. It is also proven that analogue BCH codes are maximum distance separable (MDS) codes. Examples are presented of using analogue BCH codes to provide error-correction for analogue, band-limited data, including the correction of impulse noise errors in BCH encoded, analogue stereo music waveforms. Since the data is band-limited it is already redundant, and the parity-check symbols replace existing values, so that there is no bandwidth expansion as in traditional error-correcting codes. Future research areas are outlined, including the possibility of an analogue, maximum likelihood, error-correcting decoder based on the extended Dorsch decoder of Chap. 15. Steganography is another future application area for analogue BCH codes.
Martin Tomlinson, Cen Jung Tjhai, Marcel A. Ambroze, Mohammed Ahmed, Mubarak Jibril
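
A compact sketch of the idea, under the assumption that the analogue code is defined, as in finite-field BCH/RS codes, by a run of consecutive spectral zeros: codewords are complex vectors whose DFT vanishes on n - k frequencies, giving d = n - k + 1 (MDS), and any error shows up in those "parity" bins.

```python
import numpy as np

def encode(message, n):
    """Place the k message symbols in the low DFT bins, zero the remaining
    n - k 'parity' frequencies, and inverse-transform to get the codeword."""
    spectrum = np.zeros(n, dtype=complex)
    spectrum[:len(message)] = message
    return np.fft.ifft(spectrum)

def syndrome(received, k):
    """The DFT of a valid codeword vanishes on the parity frequencies;
    anything nonzero there exposes an error."""
    return np.fft.fft(received)[k:]

if __name__ == "__main__":
    n, k = 16, 10
    codeword = encode(np.random.randn(k) + 1j * np.random.randn(k), n)
    print(np.max(np.abs(syndrome(codeword, k))))   # ~1e-16: clean codeword
    corrupted = codeword.copy()
    corrupted[5] += 1.0                            # a single impulse error
    print(np.max(np.abs(syndrome(corrupted, k))))  # clearly nonzero
```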

Chapter 12. LDPC Codes
Abstract
This chapter explores the design and performance of low-density parity-check (LDPC) codes, which, when combined with soft decision iterative decoding, currently provide the best achievable performance in digital communication systems. The application of cyclotomic cosets, idempotents and Mattson–Solomon polynomials is shown to produce many new binary cyclic LDPC codes whose parity-check equations are orthogonal in each position. A key feature of this construction technique is the incremental approach to designing the minimum Hamming distance and the sparseness of the resulting parity-check matrix of the code. Binary cyclic LDPC codes are also constructed by considering idempotents in the Mattson–Solomon domain. It is shown that, for short algebraic LDPC codes, the commonly held belief that codes whose Tanner graphs contain cycles of length 4 do not converge well under iterative decoding is not necessarily true. It is demonstrated that the cyclotomic coset-based construction can be easily extended to produce good non-binary algebraic LDPC codes. Good irregular LDPC codes may be constructed using the progressive edge-growth algorithm. Many new code results are presented showing the effects of choosing different degree distributions. Guidelines are given for designing the best codes. Methods of producing structured LDPC codes, such as those which have quasi-cyclic structure, are described. These are of interest to industry due to the simplification of the encoder and decoder. An example of such a construction to produce a (64800, 48600) LDPC code, using a protograph, is presented along with performance results using iterative decoding. Better results are obtained with this code than with the (64800, 48600) LDPC code used in the DVB-S2 standard.
Martin Tomlinson, Cen Jung Tjhai, Marcel A. Ambroze, Mohammed Ahmed, Mubarak Jibril
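
The length-4 cycles mentioned above are easy to test for: two parity-check rows sharing ones in two or more column positions form such a cycle in the Tanner graph. A small sketch:

```python
from itertools import combinations

def has_length4_cycle(H):
    """H is a list of parity-check rows with 0/1 entries."""
    supports = [frozenset(j for j, v in enumerate(row) if v) for row in H]
    # two rows overlapping in >= 2 columns close a 4-cycle
    return any(len(a & b) >= 2 for a, b in combinations(supports, 2))

if __name__ == "__main__":
    H = [[1, 1, 0, 1, 0, 0],
         [0, 1, 1, 0, 1, 0],
         [1, 0, 1, 0, 0, 1]]
    print(has_length4_cycle(H))   # False: no two rows overlap twice
```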

Analysis and Decoders

Frontmatter

Chapter 13. An Exhaustive Tree Search for Stopping Sets of LDPC Codes
Abstract
The indicative performance of an LDPC code may be determined from exhaustive analysis of the low-weight spectral terms of the code’s stopping sets, which by definition include the low-weight codewords. In a landmark paper, Rosnes and Ytrehus showed that exhaustive, low-weight stopping set analysis of codes whose parity-check matrix is sparse can be carried out using a bounded tree search over the length of the code. For an (n, k) code, the potential total search space is of size \(2^n\), but a good choice of bound dramatically reduces this search space to a practical size. Indeed, the choice of bound is critical to the success of the algorithm. It is shown in this chapter that an improved algorithm can be obtained if the bounded tree search is applied to a set of k information bits, since the potential total search space is then reduced to size \(2^k\). Since such a restriction will only find codewords and not all stopping sets, a class of bits defined as unsolved parity bits is also searched, as appended bits, in order to find all low-weight stopping sets. Weight spectrum and stopping set results are presented for commonly used WiMax LDPC codes in addition to some other well-known LDPC codes.
Martin Tomlinson, Cen Jung Tjhai, Marcel A. Ambroze, Mohammed Ahmed, Mubarak Jibril
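
By definition, a stopping set is a set of variable-node positions that no parity check touches in exactly one place; every codeword support is a stopping set, but not conversely. A definitional sketch for verifying a candidate set:

```python
def is_stopping_set(H, S):
    """H: list of parity-check rows (0/1); S: iterable of column indices."""
    S = set(S)
    for row in H:
        touched = sum(1 for j in S if row[j])
        if touched == 1:
            return False   # this check could still resolve an erasure in S
    return True

if __name__ == "__main__":
    H = [[1, 1, 0, 1, 0],
         [0, 1, 1, 0, 1],
         [1, 0, 1, 1, 0]]
    print(is_stopping_set(H, {0, 3}))   # True: each check meets S 2 or 0 times
    print(is_stopping_set(H, {0, 1}))   # False: row 2 touches S exactly once
```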

Chapter 14. Erasures and Error-Correcting Codes
Abstract
It is shown that the number and weight of low-weight codewords of a linear code determine the erasure correcting performance of the code. An analysis is given of the probability density function of the number of erasures correctable by a given code in terms of the weight enumerator polynomial of the code. For finite-length codes and the erasure channel, the best performance is achieved with maximum distance separable (MDS) codes and maximum likelihood decoding. Unfortunately, for the case of binary codes, there are no MDS codes apart from trivial cases. However, it is shown that for those binary codes whose codeword weight distribution is close to a binomial distribution, the erasure correction performance is almost equal to that of MDS codes. Such binary codes include BCH, Goppa and double-circulant codes, and the erasure correction performance of several examples of these codes is presented. The contrasting performance of some LDPC and turbo codes is also given. A probabilistic method based on the parity-check matrix is described which is particularly effective at finding low-weight codewords of any linear code. The method is based on randomly chosen erasure patterns and for most codes quickly determines the minimum Hamming distance of the code.
Martin Tomlinson, Cen Jung Tjhai, Marcel A. Ambroze, Mohammed Ahmed, Mubarak Jibril
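
Maximum likelihood erasure decoding of a binary linear code reduces to linear algebra: restrict the parity-check matrix to the erased positions and solve over GF(2); the erasures are correctable exactly when that submatrix has full column rank. A sketch:

```python
def solve_erasures(H, received, erased):
    """H: parity-check rows (lists of 0/1); received: word with the erased
    positions set to any value; erased: list of erased column indices.
    Returns the corrected word, or None if the pattern is uncorrectable."""
    m, n = len(H), len(H[0])
    erased_set = set(erased)
    known = [j for j in range(n) if j not in erased_set]
    # Build the augmented system H_erased * x = (syndrome of known positions)
    rows = []
    for row in H:
        rhs = sum(row[j] * received[j] for j in known) % 2
        rows.append([row[j] for j in erased] + [rhs])
    # Gaussian elimination over GF(2)
    e = len(erased)
    for col in range(e):
        pivot = next((r for r in range(col, m) if rows[r][col]), None)
        if pivot is None:
            return None            # rank deficient: erasure pattern ambiguous
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(m):
            if r != col and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[col])]
    corrected = list(received)
    for i, j in enumerate(erased):
        corrected[j] = rows[i][e]
    return corrected

if __name__ == "__main__":
    H = [[1, 0, 1, 1, 1, 0, 0],       # parity-check matrix of a
         [1, 1, 1, 0, 0, 1, 0],       # (7, 4, 3) Hamming code
         [0, 1, 1, 1, 0, 0, 1]]
    received = [0, 0, 0, 0, 0, 1, 0]  # codeword 1000110 with bits 0, 4 erased
    print(solve_erasures(H, received, [0, 4]))   # -> [1, 0, 0, 0, 1, 1, 0]
```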

Chapter 15. The Modified Dorsch Decoder
Abstract
The Holy Grail in coding theory is the maximum likelihood decoder. Paradoxically, the best error-correcting codes are not commonly used in practical communication systems, because no practical decoder is available that achieves anywhere near maximum likelihood performance for them. Inferior codes, such as LDPC or turbo codes, are used instead because they perform much better with a practical decoder such as the iterative decoder. One day, maybe in 100 years’ time, maximum likelihood decoders will be generally available and today’s best codes will then be used in practical systems. In this chapter a modified Dorsch decoder is described, which achieves near maximum likelihood performance for all binary codes that are not too long. It is a practical decoder, using current digital processors, for half-rate codes with lengths less than about 180 bits. The outstanding performance achieved, using several examples of the best binary codes known, is presented.
Martin Tomlinson, Cen Jung Tjhai, Marcel A. Ambroze, Mohammed Ahmed, Mubarak Jibril
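
A heavily condensed sketch in the spirit of the chapter's decoder (an order-1 ordered-statistics search only; the real modified Dorsch decoder searches far more error patterns and is considerably more refined): rank positions by reliability, re-encode from the most reliable independent bits, and keep the candidate codeword closest to the received vector in Euclidean distance.

```python
import numpy as np

def osd_decode(G, y, order=1):
    """G: k x n binary generator matrix (uint8); y: received real vector,
    BPSK mapping bit b -> 1 - 2b. Returns a decoded codeword (uint8)."""
    k, n = G.shape
    perm = np.argsort(-np.abs(y))                 # most reliable positions first
    Gp = G[:, perm] % 2
    # Reduce Gp so the first k independent columns form an identity
    basis_cols, row = [], 0
    for col in range(n):
        if row == k:
            break
        pivots = np.nonzero(Gp[row:, col])[0]
        if pivots.size == 0:
            continue                              # dependent column, skip it
        top = row + pivots[0]
        Gp[[row, top]] = Gp[[top, row]]
        for r in range(k):
            if r != row and Gp[r, col]:
                Gp[r] ^= Gp[row]
        basis_cols.append(col)
        row += 1
    # Hard decisions on the k most reliable independent positions
    base_msg = (y[perm][basis_cols] < 0).astype(np.uint8)
    candidates = [base_msg]
    if order >= 1:                                # order-1: flip each bit once
        candidates += [base_msg ^ np.eye(k, dtype=np.uint8)[i] for i in range(k)]
    best, best_dist = None, np.inf
    for msg in candidates:
        cw = (msg @ Gp) % 2                       # re-encode the candidate
        dist = np.sum((y[perm] - (1.0 - 2.0 * cw)) ** 2)
        if dist < best_dist:
            best, best_dist = cw, dist
    out = np.empty(n, dtype=np.uint8)
    out[perm] = best                              # undo the reliability ordering
    return out

if __name__ == "__main__":
    G = np.array([[1, 0, 0, 0, 1, 1, 0],
                  [0, 1, 0, 0, 0, 1, 1],
                  [0, 0, 1, 0, 1, 1, 1],
                  [0, 0, 0, 1, 1, 0, 1]], dtype=np.uint8)   # (7, 4, 3) Hamming
    tx = 1.0 - 2.0 * np.array([1, 0, 0, 0, 1, 1, 0])
    rx = tx + np.random.normal(0.0, 0.4, 7)       # noisy AWGN observation
    print(osd_decode(G, rx))
```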

Chapter 16. A Concatenated Error-Correction System Using the Code Construction
Abstract
Concatenation of good codes is a classic method of constructing longer codes which are themselves good codes. As codes increase in length, it becomes progressively harder to realise a near maximum likelihood decoder. This chapter presents a novel concatenated code arrangement featuring multiple, near maximum likelihood decoders designed for an optimised matching of codes and decoders. By using some outstanding codes as constituent codes, the concatenated coding and decoding system is able to outperform the best LDPC and turbo coding systems with the same code parameters. The performance of a net (256, 128) code achieved with the concatenated arrangement is compared to a best (256, 128) LDPC code and a best (256, 128) turbo code. Similarly, the performance of a net (512, 256) concatenated code is compared to a best (512, 256) LDPC code and a best (512, 256) turbo code. In both cases, the new system outperformed the LDPC and turbo systems. To date, for the AWGN channel and net half-rate codes, no other code or coding arrangement is known that outperforms the system presented in this chapter at lengths of 256 and 512 bits.
Martin Tomlinson, Cen Jung Tjhai, Marcel A. Ambroze, Mohammed Ahmed, Mubarak Jibril

Applications

Frontmatter

Chapter 17. Combined Error Detection and Error-Correction
Abstract
This chapter is concerned with the design of codes and systems for combined error detection and correction, primarily aimed at applications featuring retransmission of data packets which have not been decoded correctly. Several such hybrid automatic repeat request (HARQ) systems are described, including a novel system variation which uses a retransmission metric based on a soft decision: the Euclidean distance between the decoded codeword and the received vector. It is shown that a cyclic redundancy check (CRC) is not essential for this system and need not be transmitted. The generator matrix of a nested set of block codes of length 251 bits is constructed by applying construction X three times in succession, starting with an extended BCH (128, 113, 6) code. The result is the basis for an incremental-redundancy system whereby the first 128 bits transmitted form a codeword from the BCH code, followed by the transmission of a further 32 bits, if requested, producing a codeword from a (160, 113, 12) code. Further requests finally result in a codeword from a (251, 113, 20) code. Each request increases the chance of correct decoding by increasing the minimum Hamming distance of the net received codeword. Performance graphs are presented showing the comparative error rate performances and throughputs of the new systems compared to the standard HARQ system. Lower error floors and increased throughputs are evident.
Martin Tomlinson, Cen Jung Tjhai, Marcel A. Ambroze, Mohammed Ahmed, Mubarak Jibril

Chapter 18. Password Correction and Confidential Information Access System
Abstract
This chapter considers the use of Reed–Solomon codes to correct user mistakes in, or missing parts of, long entered passwords in an encrypted information retrieval system or a password-based authentication system. Dynamic, user-specific mapping of Galois field elements is used to ensure that passwords arbitrarily chosen by the user are valid codewords. A typical system is described based on GF(256) and the ANSI character set, with worked examples given for encoding and password correction. Security is also enhanced by the user-specific Galois field symbol mapping, because this defeats rainbow table attacks when used with long passwords.
Martin Tomlinson, Cen Jung Tjhai, Marcel A. Ambroze, Mohammed Ahmed, Mubarak Jibril

Chapter 19. Variations on the McEliece Public Key Cryptosystem
Abstract
A completely novel type of public key cryptosystem was invented by Professor Robert McEliece in 1978; it is based on error-correcting codes, specifically Goppa codes. Other well-established public key cryptosystems are based on the difficulty of determining logarithms in finite fields, which, in theory, can be broken by quantum computers. Despite numerous attempts by the crypto community, the McEliece system remains unbroken to this day and is one of the few systems predicted to survive attacks by powerful computers in the future. In this chapter, some variations to the McEliece system are described, including a method which destroys the deterministic link between plaintext messages and ciphertexts, thereby providing semantic security. Consequently, this method nullifies the chosen-plaintext attack to which the classic McEliece system is vulnerable. It is shown that the public key size can be reduced and that, by encrypting the plaintext with a key derived from the random error pattern of the ciphertext, the security of the system is improved, since an attacker has to determine the exact same error pattern used to produce the ciphertext. This defeats a chosen-ciphertext attack in which two random bits of the ciphertext are inverted; the standard McEliece system is vulnerable to this attack. The security of the McEliece system is analysed and a shortened ciphertext system is described which does not suffer any consequent loss of security due to shortening. This is important because, to achieve over 256 bits of security, the security analysis shows that the system needs to be based on Goppa codes of length 8192 bits. Key encapsulation and short plaintext applications need short ciphertexts in order to be efficient. It is shown that 256 bits of security can be achieved with a shortened ciphertext of length 1912 bits, which conveys an information payload of 768 bits. Some examples of interesting applications that have been implemented on a smartphone in commercial products, such as a secure messaging app and a secure cloud storage app, are also described in this chapter.
Martin Tomlinson, Cen Jung Tjhai, Marcel A. Ambroze, Mohammed Ahmed, Mubarak Jibril
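
A deliberately tiny sketch of the McEliece flow (structure only, zero security): a (7, 4) Hamming code correcting t = 1 error stands in for the large Goppa codes a real system requires, with the public key G' = SGP hiding the code behind a scrambler S and permutation P.

```python
import numpy as np

rng = np.random.default_rng(1)

# The hidden structured code: a (7, 4) single-error-correcting Hamming code
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 1],
              [0, 0, 0, 1, 1, 0, 1]], dtype=np.uint8)
H = np.array([[1, 0, 1, 1, 1, 0, 0],
              [1, 1, 1, 0, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]], dtype=np.uint8)
k, n = G.shape

def gf2_inv(A):
    """Gauss-Jordan inversion of a 0/1 matrix over GF(2)."""
    m = A.shape[0]
    aug = np.concatenate([A.copy(), np.eye(m, dtype=np.uint8)], axis=1)
    for col in range(m):
        piv = col + np.nonzero(aug[col:, col])[0][0]
        aug[[col, piv]] = aug[[piv, col]]
        for r in range(m):
            if r != col and aug[r, col]:
                aug[r] ^= aug[col]
    return aug[:, m:]

# Key generation: the public key G' = S G P disguises the code structure
while True:
    S = rng.integers(0, 2, (k, k), dtype=np.uint8)
    if round(np.linalg.det(S.astype(float))) % 2:   # odd det => invertible mod 2
        break
P = np.eye(n, dtype=np.uint8)[rng.permutation(n)]   # permutation matrix
G_pub = (S @ G @ P) % 2

def encrypt(m):
    e = np.zeros(n, dtype=np.uint8)
    e[rng.integers(n)] = 1                  # random weight-1 error (t = 1)
    return (m @ G_pub + e) % 2

def decrypt(c):
    c_p = (c @ P.T) % 2                     # undo the permutation
    syn = (H @ c_p) % 2                     # Hamming syndrome decoding
    if syn.any():
        pos = next(j for j in range(n) if np.array_equal(H[:, j], syn))
        c_p[pos] ^= 1                       # correct the single error
    return (c_p[:k] @ gf2_inv(S)) % 2       # G systematic: strip the scrambler

message = np.array([1, 0, 1, 1], dtype=np.uint8)
print(decrypt(encrypt(message)))            # -> [1 0 1 1]
```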

Chapter 20. Error-Correcting Codes and Dirty Paper Coding
Abstract
This chapter describes how error-correcting codes can be used to impress additional information onto waveforms with a minimal level of distortion. Applications include watermarking and steganography. It is shown how the modified Dorsch decoder of Chap. 15 may be used to find codewords from partitioned classes of codewords, whose waveforms may be used as a watermark that is almost invisible yet can still be reliably detected.
Martin Tomlinson, Cen Jung Tjhai, Marcel A. Ambroze, Mohammed Ahmed, Mubarak Jibril
Backmatter
Metadata
Title
Error-Correction Coding and Decoding
Authors
Martin Tomlinson
Cen Jung Tjhai
Marcel A. Ambroze
Mohammed Ahmed
Mubarak Jibril
Copyright Year
2017
Electronic ISBN
978-3-319-51103-0
Print ISBN
978-3-319-51102-3
DOI
https://doi.org/10.1007/978-3-319-51103-0