2018 | Original Paper | Book Chapter

3. Lossless Data Compression

Authors: Fady Alajaji, Po-Ning Chen

Published in: An Introduction to Single-User Information Theory

Publisher: Springer Singapore

Abstract

As mentioned in Chap. 1, data compression describes methods of representing a source by a code whose average codeword length (or code rate) is acceptably small. The representation can be lossless (or asymptotically lossless) where the reconstructed source is identical (or identical with vanishing error probability) to the original source; or lossy where the reconstructed source is allowed to deviate from the original source, usually within an acceptable threshold. We herein focus on lossless data compression.
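To make the code-rate notion concrete, here is a minimal, self-contained sketch (the source distribution and code below are hypothetical, chosen only to illustrate the definitions, not taken from the chapter): it computes the average codeword length of a lossless prefix code and compares it with the source entropy, the fundamental lower bound for lossless compression.

```python
import math

# Hypothetical memoryless source over {a, b, c, d}.
pmf = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

# A lossless binary prefix code for this source (no codeword is a
# prefix of another, so every concatenation decodes uniquely).
code = {"a": "0", "b": "10", "c": "110", "d": "111"}

# Average codeword length (code rate) in code bits per source symbol.
avg_len = sum(pmf[x] * len(code[x]) for x in pmf)

# Source entropy in bits per source symbol.
entropy = -sum(p * math.log2(p) for p in pmf.values())

print(f"average codeword length = {avg_len:.3f} bits/symbol")
print(f"source entropy          = {entropy:.3f} bits/symbol")
```

Since the source probabilities here are dyadic, the code rate meets the entropy bound exactly (both equal 1.75 bits/symbol); for general sources, variable-length lossless codes can get within one bit per source symbol of the entropy.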


Footnotes
1
In other words, our fixed-length codes are actually “fixed-to-fixed length codes” and our variable-length codes are “fixed-to-variable length codes” since, in both cases, a fixed number of source symbols is mapped onto codewords with fixed and variable lengths, respectively.
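A tiny hypothetical illustration of the distinction (the mappings below are invented for this sketch):

```python
# Both encoders below consume a fixed number of source symbols (n = 2).

# Fixed-to-fixed length code: every 2-symbol source block is mapped
# to a codeword of the same length (here, 2 bits).
fixed_to_fixed = {"aa": "00", "ab": "01", "ba": "10", "bb": "11"}

# Fixed-to-variable length code: every 2-symbol source block is mapped
# to a codeword whose length depends on the block (shorter codewords
# would be assigned to the more probable blocks).
fixed_to_variable = {"aa": "0", "ab": "10", "ba": "110", "bb": "111"}
```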
 
2
In the literature, both \((n, M)\) and \((M, n)\) have been used to denote a block code with blocklength n and size M. For example, [415, p. 149] adopts the former one, while [83, p. 193] uses the latter. We use the \((n, M)\) notation since \(M=M_n\) is a function of n in general.
 
3
When one uses an encoder–decoder pair \((f, g)\) to describe the block code, the code’s operation can be expressed as \(\mathbf{c}_m = g(f(x^n))\).
 
4
This theorem, which is also called the entropy stability property, is due to Shannon [340], McMillan [267], and Breiman [58].
 
5
Chebyshev’s inequality as well as its proof can be found on p. 287 in Appendix B.
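For the reader’s convenience, the standard form of the inequality (a textbook fact, restated here) is: for a random variable Z with finite variance and any \(a > 0\),
$$\Pr\bigl\{|Z - E[Z]| \ge a\bigr\} \le \frac{\mathrm{Var}[Z]}{a^2}.$$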
 
6
In the proof, we assume that the variance \(\sigma_X^2 = \mathrm{Var}[-\log_2 P_X(X)] < \infty\). This holds since the source alphabet is finite:
$$\begin{aligned} \mathrm{Var}[-\log_2 P_X(X)] &\le E[(\log_2 P_X(X))^2] = \sum_{x\in\mathcal{X}} P_X(x)(\log_2 P_X(x))^2 \\ &\le \sum_{x\in\mathcal{X}} \frac{4}{e^2}[\log_2(e)]^2 = \frac{4}{e^2}[\log_2(e)]^2 \times |\mathcal{X}| < \infty. \end{aligned}$$
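The per-term bound used above rests on a standard calculus fact, stated here for completeness: for \(0 < t \le 1\),
$$t(\log_2 t)^2 = t(\ln t)^2 (\log_2 e)^2 \le \frac{4}{e^2}(\log_2 e)^2,$$
since \(t(\ln t)^2\) has derivative \((\ln t)(\ln t + 2)\) and is therefore maximized over \((0,1]\) at \(t = e^{-2}\), where it equals \(4/e^2\).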
 
7
Note that (3.2.2) is equivalent to \(\limsup_{n\rightarrow\infty} P_e(\underset{\sim}{\mathcal{C}}_n) \le \varepsilon\). Since \(\varepsilon\) can be made arbitrarily small, the forward part actually establishes the existence of a sequence of D-ary block codes \(\{\underset{\sim}{\mathcal{C}}_n\}_{n=1}^\infty\) satisfying (3.2.1) such that \(\limsup_{n\rightarrow\infty} P_e(\underset{\sim}{\mathcal{C}}_n) = 0\). Accordingly, the converse would be that any sequence of D-ary block codes satisfying (3.2.3) satisfies \(\limsup_{n\rightarrow\infty} P_e(\underset{\sim}{\mathcal{C}}_n) > 0\). However, the so-called strong converse gives a stronger consequence: \(\limsup_{n\rightarrow\infty} P_e(\underset{\sim}{\mathcal{C}}_n) = 1\) (as \(\varepsilon\) can be made arbitrarily small).
 
8
The statement and proof of the forward part of Theorem 3.6 make clear that the source entropy can be achieved as an asymptotic compression rate as long as \((1/n)\log_D M_n\) approaches it from above as n increases. Furthermore, the asymptotic compression rate is defined as the limsup of \((1/n)\log_D M_n\) in order to guarantee reliable compression for all sufficiently large n (analogously, in channel coding, the asymptotic transmission rate is defined via the liminf of \((1/n)\log_D M_n\) to ensure reliable communication for all sufficiently large n; see Chap. 4).
 
9
The definitions of stationarity and ergodicity can be found in Sect. B.3 of Appendix B.
 
10
If a Markov source is mentioned without specifying its order, it is understood that it is a first-order Markov source; see Appendix B for a brief overview on Markov sources and their properties.
 
11
See Sect. B.3 of Appendix B for the definition of irreducibility for Markov sources.
 
12
Another notation for the divergence rate is \(\lim _{n\rightarrow \infty } \frac{1}{n} D(X^n \Vert \hat{X}^n)\).
 
13
More generally, the Rényi entropy rate of order \(\alpha \), \(\lim _{n\rightarrow \infty } \frac{1}{n} H_\alpha (X^n)\), as well as the Rényi divergence rate of order \(\alpha \), \(\lim _{n\rightarrow \infty } \frac{1}{n} D_\alpha (P_{X^n} \Vert P_{\hat{X}^n})\), for arbitrary time-invariant Markov sources \(\{X_{i}\}\) and \(\{\hat{X}_{i}\}\) exist and admit closed-form expressions [311] (see also the earlier work in [279], where the results hold for more restricted classes of Markov sources).
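As an illustration of such a closed form, the following sketch uses the expression commonly associated with this line of work: for a finite, irreducible, time-invariant Markov source with transition matrix \(P = [p(j \mid i)]\) and \(\alpha \ne 1\), the Rényi entropy rate equals \(\frac{1}{1-\alpha}\log_2 \lambda_\alpha\), where \(\lambda_\alpha\) is the largest positive (Perron) eigenvalue of the entrywise power \([p(j \mid i)^\alpha]\). Treat this formula, and the code, as an assumption-laden sketch rather than a restatement of [311].

```python
import numpy as np

def renyi_entropy_rate(P: np.ndarray, alpha: float) -> float:
    """Rényi entropy rate (order alpha != 1) in bits/source symbol of an
    irreducible time-invariant Markov chain with transition matrix P,
    computed via the Perron eigenvalue of the entrywise power P**alpha.
    (Assumed closed form; see the lead-in caveat.)"""
    lam = max(np.linalg.eigvals(P ** alpha).real)  # Perron root is real
    return float(np.log2(lam) / (1.0 - alpha))

# Sanity check: for an i.i.d. source (all rows identical), the formula
# reduces to the single-letter Renyi entropy (1/(1-alpha)) log2 sum_j p_j^alpha.
p = np.array([0.5, 0.25, 0.25])
P_iid = np.tile(p, (3, 1))
alpha = 2.0
print(renyi_entropy_rate(P_iid, alpha))            # via Perron root
print(np.log2(np.sum(p ** alpha)) / (1 - alpha))   # single-letter H_2(X)
```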
 
14
The converse is also true; i.e., if a stationary process cannot be represented as a mixture of stationary ergodic processes, then it is ergodic.
 
15
A process \(\{Z_n\}_{n=1}^{\infty }\) is called exchangeable (or symmetrically dependent) if for every finite positive integer n, the random variables \(Z_1, Z_2,\ldots , Z_n\) have the property that their joint distribution is invariant with respect to all permutations of the indices \(1,2,\ldots , n\) (e.g., see [120]). The notion of exchangeability is originally due to de Finetti [90]. It directly follows from the definition that an exchangeable process is stationary.
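As a small numerical sanity check of this definition (the mixture source below is hypothetical, not from the text): a 50/50 mixture of two i.i.d. Bernoulli processes is exchangeable, since each component's joint pmf is a product of identical factors and hence permutation-invariant.

```python
from itertools import permutations, product

def joint_pmf(z):
    """Joint pmf of a hypothetical mixture source: a fair coin picks a
    bias p in {0.2, 0.8} once, then i.i.d. Bernoulli(p) bits are emitted."""
    def iid(p):
        prob = 1.0
        for bit in z:
            prob *= p if bit == 1 else 1.0 - p
        return prob
    return 0.5 * iid(0.2) + 0.5 * iid(0.8)

# Exchangeability check for n = 4: the joint pmf is invariant under
# every permutation of the indices 1, ..., n.
for z in product([0, 1], repeat=4):
    base = joint_pmf(z)
    assert all(abs(joint_pmf(w) - base) < 1e-12 for w in permutations(z))
print("joint pmf invariant under all index permutations (n = 4)")
```

(Such a mixture of two distinct i.i.d. processes is stationary but not ergodic.)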
 
16
Since we are measuring \(\rho _t\) in code bits/source symbol, all logarithms in its expression are in base 2, and hence this redundancy can be eliminated via asymptotically lossless binary block codes (one can also change the units to D-ary code symbols/source symbol using base-D logarithms for the case of D-ary block codes).
 
17
This theorem is also attributed to McMillan.
 
18
Another name for prefix codes is prefix-free codes.
 
Metadata
Title
Lossless Data Compression
Authors
Fady Alajaji
Po-Ning Chen
Copyright Year
2018
Publisher
Springer Singapore
DOI
https://doi.org/10.1007/978-981-10-8001-2_3