
About this Book

When the 50th anniversary of the birth of Information Theory was celebrated at the 1998 IEEE International Symposium on Information Theory in Boston, there was a great deal of reflection on the year 1993 as a critical year. As the years pass and more perspective is gained, it is a fairly safe bet that we will view 1993 as the year when the "early years" of error control coding came to an end. This was the year in which Berrou, Glavieux and Thitimajshima presented "Near Shannon Limit Error-Correcting Coding and Decoding: Turbo Codes" at the International Conference on Communications in Geneva. In their presentation, Berrou et al. claimed that a combination of parallel concatenation and iterative decoding can provide reliable communications at a signal-to-noise ratio that is within a few tenths of a dB of the Shannon limit. Nearly fifty years of striving to achieve the promise of Shannon's noisy channel coding theorem had come to an end. The implications of this result were immediately apparent to all: coding gains on the order of 10 dB could be used to dramatically extend the range of communication receivers, increase data rates and services, or substantially reduce transmitter power levels. The 1993 ICC paper set in motion several research efforts that have permanently changed the way we look at error control coding.

Table of Contents

Frontmatter

Chapter 1. Introduction

Abstract
This book is the story of two papers: one that posed a remarkable problem to the community of research engineers, and a second paper that has come very close to finding a solution. The first is Shannon’s 1948 paper entitled “A Mathematical Theory of Communication” [Sha48], the paper that launched the twin fields of Information Theory and Error Control Coding. In this paper Shannon defined the concept of channel capacity. He then showed that, so long as the rate at which information is transmitted is less than the channel capacity, there exist error control codes that can provide arbitrarily high levels of reliability at the receiver output. The proof of this, the “Noisy Channel Coding Theorem,” was existential and not constructive. We were left knowing that such coding schemes existed, but had no idea how to construct them. The subsequent fifty years of error control coding have been an effort to achieve this goal.
Chris Heegard, Stephen B. Wicker
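The capacity quantities named in this abstract are easy to compute directly. The sketch below is ours, not the book's: a few lines of Python (the function names are illustrative) evaluating the capacity of a binary symmetric channel, C = 1 − H2(p), and of the discrete-time AWGN channel, C = ½ log2(1 + SNR) bits per channel use.

```python
import math

def binary_entropy(p: float) -> float:
    """H2(p) = -p*log2(p) - (1-p)*log2(1-p), in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p: float) -> float:
    """Capacity of the binary symmetric channel with crossover probability p."""
    return 1.0 - binary_entropy(p)

def awgn_capacity(snr_linear: float) -> float:
    """Capacity, in bits per real channel use, of the discrete-time
    AWGN channel at the given (linear, not dB) signal-to-noise ratio."""
    return 0.5 * math.log2(1.0 + snr_linear)

# Shannon's theorem says any rate below these numbers is achievable with
# arbitrarily small error probability -- but the proof names no such code.
print(bsc_capacity(0.1))    # ~0.531 bits per channel use
print(awgn_capacity(1.0))   # 0.5 bits per channel use at 0 dB SNR
```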

Chapter 2. Binary Codes, Graphs, and Trellises

Abstract
In this chapter we focus on the structure and description of binary convolutional codes and their encoders. Both algebraic and graph-based methods are used to develop a generic description of the codes. It is shown that both recursive and feedforward encoders can be used to generate the same convolutional code. The distinctions between various encoders for a given code are discussed, and an emphasis is placed on the properties of recursive, systematic convolutional encoders. It is shown that such descriptive and analytic techniques can also be applied to block codes through the BCJR trellis construction technique.
Chris Heegard, Stephen B. Wicker
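As a concrete illustration of the feedforward encoders this chapter describes, the Python sketch below implements the standard rate-1/2, memory-2 encoder with octal generators (7, 5). This is a generic textbook example of ours, not necessarily an encoder treated in the chapter.

```python
def conv_encode(bits, g1=0b111, g2=0b101, memory=2):
    """Rate-1/2 feedforward convolutional encoder.

    g1 and g2 are the generator polynomials ((7, 5) in octal); each is
    a tap mask applied to the shift register, producing two output bits
    per input bit.  The register is flushed back to zero at the end.
    """
    mask = (1 << (memory + 1)) - 1
    state = 0                                # shift-register contents
    out = []
    for b in bits + [0] * memory:            # zeros flush the register
        state = ((state << 1) | b) & mask
        out.append(bin(state & g1).count("1") % 2)   # parity of g1 taps
        out.append(bin(state & g2).count("1") % 2)   # parity of g2 taps
    return out

# Encode a short message; the output has 2 * (len(bits) + memory) bits.
print(conv_encode([1, 0, 1, 1]))  # -> [1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1]
```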

Chapter 3. Interleaving

Abstract
Interleaving is a standard signal processing technique used in a variety of communications systems. An interleaver is a device that takes symbols from a fixed alphabet at the input and produces the identical symbols at the output in a different temporal order. The classical use for interleaving is to “randomize” the locations of errors introduced in transmission, allowing for the use of random-error-correcting codes at the receiver. Such a situation occurs in (1) burst error channels (e.g., wireless communications channels) and (2) concatenated coding, where the first stage of decoding generates burst errors (e.g., a Viterbi decoder). The more recent application of interleaving is in the parallel concatenated encoders invented by Berrou, Glavieux and Thitimajshima [BGT93].
Chris Heegard, Stephen B. Wicker
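A minimal example of the classical technique: the row-column (block) interleaver below, a generic sketch of ours rather than a construction from the text, writes symbols into a matrix by rows and reads them out by columns. A burst of up to `rows` consecutive channel errors is then spread at least `cols` positions apart after deinterleaving, which is what lets a random-error-correcting decoder handle it.

```python
def interleave(symbols, rows, cols):
    """Block interleaver: write into a rows x cols array by rows,
    read out by columns.  len(symbols) must equal rows * cols."""
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    """Inverse permutation: write by columns, read by rows."""
    assert len(symbols) == rows * cols
    out = [None] * (rows * cols)
    i = 0
    for c in range(cols):
        for r in range(rows):
            out[r * cols + c] = symbols[i]
            i += 1
    return out

data = list(range(12))                 # a 3 x 4 block of symbols
tx = interleave(data, 3, 4)            # [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
assert deinterleave(tx, 3, 4) == data  # a burst in tx lands 4 apart in data
```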

Chapter 4. Concatenated Codes

Abstract
In this chapter we introduce concatenated error control systems. Concatenated systems use two or more component codes in an effort to provide a high level of error control performance at the expense of only a moderate level of complexity. The encoder consists of two or more component encoders that combine to generate a long code with good properties. The decoder uses the component code substructure of the concatenated encoder to realize a multi-stage implementation that is much less complex than a single-stage approach. In this chapter we consider both the original, serial form of concatenation as well as the more recent, parallel form. The former allows for various forms of iterative decoding that will be discussed briefly here. The latter, of course, was developed in conjunction with turbo iterative decoding, which will be the subject of a later chapter. Parallel concatenation is introduced here, and the details of performance and decoding are developed in the two chapters that follow. The chapter concludes with a generic description of concatenated codes that relates the two basic forms, while allowing for useful generalizations.
Chris Heegard, Stephen B. Wicker
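The parallel form can be sketched in a few lines: the information sequence is transmitted once as systematic bits, one component encoder produces a parity stream from the data, and a second component encoder produces another parity stream from an interleaved copy. The Python sketch below is ours and deliberately generic; `component_parity` is a hypothetical stand-in (a toy accumulator) for whatever recursive systematic component encoder a real design would use.

```python
import random

def component_parity(bits):
    """Toy stand-in for a recursive systematic component encoder:
    an accumulator (running XOR), one parity bit per input bit."""
    acc, parity = 0, []
    for b in bits:
        acc ^= b
        parity.append(acc)
    return parity

def parallel_concatenated_encode(bits, permutation):
    """Rate-1/3 parallel concatenation: systematic bits, parity computed
    from the data, and parity computed from an interleaved copy."""
    interleaved = [bits[i] for i in permutation]
    return bits, component_parity(bits), component_parity(interleaved)

k = 8
perm = list(range(k))
random.seed(1)
random.shuffle(perm)                   # pseudo-random interleaver
u = [1, 0, 1, 1, 0, 0, 1, 0]
sys_bits, p1, p2 = parallel_concatenated_encode(u, perm)
print(sys_bits, p1, p2)                # three length-k streams: rate 1/3
```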

Chapter 5. BCE and PCE Performance

Abstract
In this chapter we identify the structural characteristics of Binary Convolutional Encoders (BCEs) and Parallel Concatenated Encoders (PCEs) that determine their error control performance. It will be shown that one important factor for PCE performance is the sparsity of codewords at weights near the free Hamming distance. Another is the manner in which information sequences are mapped to code or parity sequences. An analytic framework is provided for performance analyses over the Additive White Gaussian Noise (AWGN) channel. This framework can only provide performance bounds; an exact performance analysis has not yet been devised. The bounds are useful, however, both for conceptual and design purposes. For example, the bounds indicate that the size of the PCE interleaver is an important contributor to performance when the component encoders are systematic IIR BCEs. This inference is readily verified through simulations.
Chris Heegard, Stephen B. Wicker
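The flavor of such bounding frameworks can be seen in the standard union bound for BPSK over the AWGN channel: given the (truncated) distance spectrum of the code, the bit error rate is bounded by a weighted sum of Gaussian tail probabilities, Pb ≤ Σ_d (W_d / k) Q(√(2 R d Eb/N0)), where W_d is the total information weight of codewords at Hamming distance d. The sketch below is generic coding-theory material in our own notation, not necessarily the bound developed in the chapter, and the spectrum numbers are purely illustrative.

```python
import math

def Q(x: float) -> float:
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def union_bound_ber(spectrum, k, rate, ebno_db):
    """Union bound on bit error rate for BPSK over AWGN.

    spectrum: dict mapping Hamming distance d -> total information
              weight W_d of all codewords at distance d (a truncated
              spectrum near the free distance is usually enough).
    k:        number of information bits per block.
    """
    ebno = 10.0 ** (ebno_db / 10.0)
    return sum((wd / k) * Q(math.sqrt(2.0 * rate * d * ebno))
               for d, wd in spectrum.items())

# Hypothetical truncated spectrum: total information weight 3 at the
# free distance 10, weight 14 at distance 12 -- illustrative only.
spectrum = {10: 3, 12: 14}
for db in (1.0, 2.0, 3.0):
    print(db, union_bound_ber(spectrum, k=1000, rate=0.5, ebno_db=db))
```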

Chapter 6. Turbo Decoding

Abstract
The problem of decoding received, encoded data sequences can be formulated and solved in a number of ways. In this chapter we provide a general framework for the decoding problem. The basic concepts of measures and metrics are introduced. It is then shown how metrics can be used in different ways to realize different decoding algorithms, with an emphasis on the BCJR and Viterbi algorithms. These two algorithms are unified within a general framework that we call the “Generalized Viterbi Algorithm”. This is followed by a detailed description of turbo decoding: a low-complexity, suboptimal means for decoding serial and parallel concatenated codes. The chapter concludes with a discussion of mismatched decoding: a problem that arises when the channel statistics are not known or have been inaccurately specified. It is shown that turbo decoders are remarkably robust under such conditions, and can be made more so by including channel estimation within the decoding process.
Chris Heegard, Stephen B. Wicker
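To make the metric idea concrete, here is a hard-decision Viterbi decoder (ours; a generic textbook version, not the generalized algorithm developed in the chapter) for the same (7, 5) feedforward encoder sketched under Chapter 2, using Hamming distance as the branch metric. It assumes the encoder started in state 0 and was flushed with `memory` zeros.

```python
def viterbi_decode(received, g1=0b111, g2=0b101, memory=2):
    """Hard-decision Viterbi decoder for the rate-1/2 (7,5) code,
    with Hamming distance as the branch metric."""
    num_states = 1 << memory
    INF = float("inf")
    metric = [0.0] + [INF] * (num_states - 1)  # path metric per state
    paths = [[] for _ in range(num_states)]    # surviving input sequences
    for t in range(0, len(received), 2):
        r = received[t:t + 2]
        new_metric = [INF] * num_states
        new_paths = [None] * num_states
        for s in range(num_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):                   # hypothesized input bit
                full = (s << 1) | b            # register: old state + new bit
                o0 = bin(full & g1).count("1") % 2   # expected output pair
                o1 = bin(full & g2).count("1") % 2
                m = metric[s] + (o0 != r[0]) + (o1 != r[1])
                ns = full & (num_states - 1)   # next state: last memory bits
                if m < new_metric[ns]:
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best_state = min(range(num_states), key=lambda s: metric[s])
    return paths[best_state][:-memory]         # drop the flush bits

# Noiseless (7,5) encoding of 1011 (two flush bits), per the Chapter 2 sketch:
received = [1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1]
print(viterbi_decode(received))                # -> [1, 0, 1, 1]
```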

Chapter 7. Belief Propagation and Parallel Decoding

Abstract
Probabilistic reasoning can be modeled through the use of graphs — the vertices in the graphs represent random variables, while the edges represent dependencies between the random variables. Such representations play a fundamental role in the development of expert systems, in part because they allow for a rapid factorization and evaluation of the joint probability distributions of the graph variables [CGH97].
Chris Heegard, Stephen B. Wicker
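A tiny instance of the factorization idea: for a chain of binary variables X1 → X2 → X3, the joint distribution factors as P(x1)P(x2|x1)P(x3|x2), so the marginal of X3 can be obtained by passing local messages along the chain instead of summing over the full joint. The Python sketch below is our own generic illustration with made-up numbers, not an example from the chapter.

```python
# Chain X1 -> X2 -> X3 of binary variables.  The joint factors as
# P(x1) P(x2|x1) P(x3|x2), so marginals follow from local messages.
p1 = [0.6, 0.4]                      # P(X1)
p2_given_1 = [[0.9, 0.1],            # P(X2 | X1=0)
              [0.2, 0.8]]            # P(X2 | X1=1)
p3_given_2 = [[0.7, 0.3],            # P(X3 | X2=0)
              [0.4, 0.6]]            # P(X3 | X2=1)

def forward_message(prior, conditional):
    """Sum out the parent: mu(child) = sum_parent P(parent) P(child|parent)."""
    return [sum(prior[i] * conditional[i][j] for i in range(2))
            for j in range(2)]

m2 = forward_message(p1, p2_given_1)  # P(X2), one local summation
m3 = forward_message(m2, p3_given_2)  # P(X3), never touching the 8-entry joint
print(m2, m3)

# Brute force over the joint, for comparison -- same answer.
brute = [0.0, 0.0]
for x1 in (0, 1):
    for x2 in (0, 1):
        for x3 in (0, 1):
            brute[x3] += p1[x1] * p2_given_1[x1][x2] * p3_given_2[x2][x3]
print(brute)
```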

Backmatter
