2000 | Book

Information and Coding Theory

Authors: Gareth A. Jones, MA, DPhil, J. Mary Jones, MA, DPhil

Publisher: Springer London

Book series: Springer Undergraduate Mathematics Series

About this book

As this Preface is being written, the twentieth century is coming to an end. Historians may perhaps come to refer to it as the century of information, just as its predecessor is associated with the process of industrialisation. Successive technological developments such as the telephone, radio, television, computers and the Internet have had profound effects on the way we live. We can see pictures of the surface of Mars or the early shape of the Universe. The contents of a whole shelf-load of library books can be compressed onto an almost weightless piece of plastic. Billions of people can watch the same football match, or can keep in instant touch with friends around the world without leaving home. In short, massive amounts of information can now be stored, transmitted and processed, with surprising speed, accuracy and economy.

Of course, these developments do not happen without some theoretical basis, and as is so often the case, much of this is provided by mathematics. Many of the first mathematical advances in this area were made in the mid-twentieth century by engineers, often relying on intuition and experience rather than a deep theoretical knowledge to lead them to their discoveries. Soon the mathematicians, delighted to see new applications for their subject, joined in and developed the engineers' practical examples into wide-ranging theories, complete with definitions, theorems and proofs.

Table of contents

Frontmatter
1. Source Coding
Abstract
This chapter considers how the information emanating from a source can be encoded, so that it can later be decoded unambiguously and without delay. These two requirements lead to the concepts of uniquely decodable and instantaneous codes. We shall find necessary and sufficient conditions for a code to have these properties, we shall see how to construct such codes, and we shall prove Kraft’s and McMillan’s inequalities, which essentially say that such codes exist if and only if they have sufficiently long code-words.
Gareth A. Jones, J. Mary Jones
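
As a rough illustration of the Kraft inequality named in this abstract: an instantaneous r-ary code with word-lengths l_1, ..., l_n exists if and only if the sum of r^(-l_i) is at most 1. The following minimal Python sketch checks this condition; the word-lengths are our own examples, not taken from the book.

    def kraft_sum(lengths, r=2):
        """Kraft sum for the given code-word lengths over an r-ary alphabet."""
        return sum(r ** (-l) for l in lengths)

    # Lengths 1, 2, 3, 3 satisfy the inequality, so an instantaneous binary
    # code with these lengths exists (for example 0, 10, 110, 111).
    print(kraft_sum([1, 2, 3, 3]))   # 1.0  <= 1, achievable
    print(kraft_sum([1, 1, 2]))      # 1.25 >  1, no such code exists
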
2. Optimal Codes
Abstract
We saw in Chapter 1 how to encode information so that decoding is unique or instantaneous. In either case the basic requirement, given by Kraft’s or McMillan’s inequality, is that we should use sufficiently long code-words. This raises the question of efficiency: if the code-words are too long, then storage may be difficult and transmission may be slow. We therefore need to strike a balance between using words which are long enough to allow effective decoding, and short enough for economy. From this point of view, the best codes available are those called optimal codes, the instantaneous codes with least average word-length. We will prove that they exist, and we will examine Huffman’s algorithm for constructing them. For simplicity, we will concentrate mainly on the binary case (r = 2), though we will briefly outline how these ideas extend to non-binary codes.
Gareth A. Jones, J. Mary Jones
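
The following is a minimal sketch of binary Huffman coding (r = 2), the algorithm this abstract names: repeatedly merge the two least probable symbols until one tree remains. The symbol probabilities below are made up for illustration.

    import heapq

    def huffman(probs):
        """Map each symbol to its binary Huffman code-word."""
        # Heap entries: (probability, tie-breaker, {symbol: partial word}).
        heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
        heapq.heapify(heap)
        count = len(heap)
        while len(heap) > 1:
            p0, _, c0 = heapq.heappop(heap)   # two least probable groups
            p1, _, c1 = heapq.heappop(heap)
            merged = {s: "0" + w for s, w in c0.items()}
            merged.update({s: "1" + w for s, w in c1.items()})
            heapq.heappush(heap, (p0 + p1, count, merged))
            count += 1
        return heap[0][2]

    probs = {"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1}
    codes = huffman(probs)
    avg = sum(probs[s] * len(w) for s, w in codes.items())
    print(codes, "average word-length:", avg)   # 1.9 binary digits per symbol
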
3. Entropy
Abstract
The aim of this chapter is to introduce the entropy function, which measures the amount of information emitted by a source. We shall examine the basic properties of this function, and show how it is related to the average word-lengths of encodings of the source.
Gareth A. Jones, J. Mary Jones
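
A minimal sketch of the entropy function described above, H(p_1, ..., p_n) = -sum of p_i log2 p_i, measured in bits per symbol; the distributions below are our own illustrative choices.

    from math import log2

    def entropy(probs):
        """Shannon entropy (base 2) of a probability distribution."""
        return -sum(p * log2(p) for p in probs if p > 0)

    print(entropy([0.5, 0.5]))   # 1.0 bit: a fair coin
    print(entropy([0.9, 0.1]))   # about 0.469 bits: a biased coin
    print(entropy([0.25] * 4))   # 2.0 bits: four equally likely symbols
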
4. Information Channels
Abstract
In this chapter we consider a source sending messages through an unreliable (or noisy) channel to a receiver. The “noise” in the channel could represent mechanical or human errors, or interference from another source. A good example is a space-probe, with a weak power-supply, sending back a message which has to be extracted from many other stronger competing signals. Because of noise, the symbols received may not be the same as those sent. Our aim here is to measure how much information is transmitted, and how much is lost in this process, using several different variations of the entropy function, and then to relate this to the average word-length of the code used.
Gareth A. Jones, J. Mary Jones
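
One standard way the entropy variations mentioned in this abstract fit together is the mutual information I(X;Y) = H(Y) - H(Y|X), the information actually transmitted through the channel; the book's own notation may differ, and the channel matrix and input distribution below are illustrative values of ours.

    from math import log2

    def h(probs):
        """Shannon entropy (base 2) of a distribution."""
        return -sum(p * log2(p) for p in probs if p > 0)

    def mutual_information(p_x, channel):
        """I(X;Y) for input distribution p_x and a row-stochastic channel matrix."""
        p_y = [sum(p_x[i] * channel[i][j] for i in range(len(p_x)))
               for j in range(len(channel[0]))]
        h_y_given_x = sum(p_x[i] * h(channel[i]) for i in range(len(p_x)))
        return h(p_y) - h_y_given_x

    # A binary symmetric channel that flips each bit with probability 0.1,
    # fed with a uniform input distribution.
    bsc = [[0.9, 0.1], [0.1, 0.9]]
    print(mutual_information([0.5, 0.5], bsc))   # about 0.531 bits per symbol
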
5. Using an Unreliable Channel
Abstract
In this chapter, we assume that we are given an unreliable channel Γ, such as a BSC with P < 1, and that our task is to transmit information through Γ as accurately as possible. Shannon’s Fundamental Theorem, which is perhaps the most important result in Information Theory, states that the capacity C of Γ is the least upper bound for the rates at which one can transmit information accurately through Γ. After first explaining some of the concepts involved, we will look at a simple example of how this accurate transmission might be achieved. A full proof of Shannon’s Theorem is technically quite difficult, so for simplicity we will restrict the proof to the case where Γ is the BSC; we will give an outline proof for this channel in §5.4, postponing a more detailed proof to Appendix C.
Gareth A. Jones, J. Mary Jones
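
For the BSC mentioned in this abstract, the capacity has a closed form, C = 1 - H(P), where H is the binary entropy function and P is the probability that a bit is transmitted correctly. Here is a small sketch of that formula; the values of P are illustrative.

    from math import log2

    def bsc_capacity(P):
        """Capacity in bits per symbol of a binary symmetric channel."""
        if P in (0.0, 1.0):
            return 1.0   # noiseless (or perfectly inverted) channel
        return 1 + P * log2(P) + (1 - P) * log2(1 - P)

    print(bsc_capacity(0.99))   # about 0.919: mild noise costs little capacity
    print(bsc_capacity(0.5))    # 0.0: the output is independent of the input
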
6. Error-correcting Codes
Abstract
Our aim now is to construct codes C with good transmission-rates R and low error-probabilities Pr_E, as promised by Shannon’s Fundamental Theorem (§5.4). This part of the subject goes under the name of Coding Theory (or Error-correcting Codes), as opposed to Information Theory, which covers the topics considered earlier. The construction of such codes is quite a difficult task, and we will concentrate on a few simple examples to illustrate some of the methods used to construct more advanced codes.
Gareth A. Jones, J. Mary Jones
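
The simplest example of the trade-off this abstract describes is the binary triple-repetition code, with rate R = 1/3, decoded by majority vote; this small sketch is our own illustration, not the book's.

    def encode(bits):
        """Repeat each bit three times."""
        return [b for bit in bits for b in (bit, bit, bit)]

    def decode(received):
        """Majority-vote each block of three received bits."""
        return [int(sum(received[i:i + 3]) >= 2)
                for i in range(0, len(received), 3)]

    sent = encode([1, 0, 1])   # [1,1,1, 0,0,0, 1,1,1]
    noisy = sent[:]
    noisy[1] ^= 1              # the channel flips one bit in each of two blocks
    noisy[3] ^= 1
    print(decode(noisy))       # [1, 0, 1]: single errors per block are corrected
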
7. Linear Codes
Abstract
In Chapter 6 we considered several examples of linear codes, and in Lemma 6.8 we have already seen one advantage of dealing with them, namely that calculating the minimum distance of a linear code is easier than for general codes. In this chapter we will study linear codes in greater detail, noting other advantages to be obtained by applying elementary linear algebra and matrix theory, including an even simpler method for calculating the minimum distance. The theoretical background required includes such topics as linear independence, dimension, and row and column operations. These are normally covered in any first-year university linear algebra course; although such courses often restrict attention to vector spaces and matrices over the fields of real or complex numbers, all the important results and techniques we need extend in the obvious way to arbitrary fields, including finite fields. Throughout this chapter, we will assume that the alphabet F is the finite field F_q of order q, for some prime power q = p^e.
Gareth A. Jones, J. Mary Jones
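
The simplification this abstract alludes to is that, for a linear code, the minimum distance equals the minimum weight of a non-zero code-word, so one can search single code-words instead of pairs. A brute-force sketch of that fact follows; the generator matrix is our choice (the binary [7,4] Hamming code), and any small generator matrix over F_2 would do.

    from itertools import product

    G = [[1, 0, 0, 0, 1, 1, 0],
         [0, 1, 0, 0, 1, 0, 1],
         [0, 0, 1, 0, 0, 1, 1],
         [0, 0, 0, 1, 1, 1, 1]]

    def min_weight(G):
        """Minimum weight of a non-zero word of the binary code generated by G."""
        n, k = len(G[0]), len(G)
        best = n
        for msg in product([0, 1], repeat=k):
            if any(msg):
                word = [sum(msg[i] * G[i][j] for i in range(k)) % 2
                        for j in range(n)]
                best = min(best, sum(word))
        return best

    print(min_weight(G))   # 3: minimum distance 3, so single errors are corrected
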
Backmatter
Metadata
Title
Information and Coding Theory
Authors
Gareth A. Jones, MA, DPhil
J. Mary Jones, MA, DPhil
Copyright year
2000
Publisher
Springer London
Electronic ISBN
978-1-4471-0361-5
Print ISBN
978-1-85233-622-6
DOI
https://doi.org/10.1007/978-1-4471-0361-5