
Open Access 2017 | OriginalPaper | Chapter

3. Soft Decision and Quantised Soft Decision Decoding

Authors: Martin Tomlinson, Cen Jung Tjhai, Marcel A. Ambroze, Mohammed Ahmed, Mubarak Jibril

Published in: Error-Correction Coding and Decoding

Publisher: Springer International Publishing


Abstract

In many modern communication systems, the full benefits of error-correction coding are only realised if the decoder is able to utilise soft decision decoding. In this chapter, we give both approximate and exact bounds on the performance of soft decision decoding compared to hard decision decoding. The effects of soft decision quantisation are explored as a function of the number of quantisation levels. Examples are given of the different performance achieved using soft and quantised soft decision decoding for (100,50) and (200,100) binary codes. A modified version of the near maximum likelihood decoder, the Dorsch decoder described in Chap. 15, is presented for hard decision decoding. The performance improvements achieved by the decoder, compared to bounded distance decoding, are evaluated for some BCH codes.

3.1 Introduction

The use of hard decision decoding results in a decoding loss compared to soft decision decoding. There are several references that have quantified the loss, which is a function of the operating \(\frac{E_b}{N_0}\) ratio, the error-correcting code and the quantisation of the soft decisions. Wozencraft and Jacobs [6] give a detailed analysis of the effects of soft decision quantisation on the probability of decoding error, \(P_{ec}\), for the ensemble of all binary codes of length n without restriction of the choice of code. Their analysis follows from the Coding Theorem, presented by Gallager for the ensemble of random binary codes [3].

3.2 Soft Decision Bounds

There are \(2^n\) possible binary combinations for each codeword, which in terms of the n-dimensional signal space hypercube corresponds to one vertex taken from \(2^n\) possible vertices. There are \(2^k\) codewords and therefore \(2^{n2^k}\) different possible codes. The receiver is considered to be composed of \(2^k\) matched filters, one for each codeword, and a decoder error occurs if any of the matched filter receivers has a larger output than the matched filter receiver corresponding to the transmitted codeword. Consider the matched filter receiver for the transmitted codeword and the matched filter receiver for another codeword, and suppose that the two codewords differ in d bit positions, so that the Hamming distance between the two codewords is d. The energy per transmitted bit is \(E_s=\frac{k}{n}E_b\), where \(E_b\) is the energy per information bit. The noise variance per matched filtered received bit is \(\sigma ^2=\frac{N_0}{2}\), where \(N_0\) is the single sided noise spectral density. In the absence of noise, the output of the matched filter receiver for the transmitted codeword is \(n\sqrt{E_s}\), and the output of the other codeword matched filter receiver is \((n-2d)\sqrt{E_s}\). The noise voltage at the output of the matched filter receiver for the transmitted codeword is denoted as \(n_c -n_1\), and the noise voltage at the output of the other matched filter receiver will be \(n_c + n_1\). The common noise voltage \(n_c\) arises from correlation of the bits common to both codewords with the received noise, and the noise voltages \(-n_1\) and \(n_1\) arise, respectively, from correlation of the other d bits with the received noise.
A decoder error occurs if
$$\begin{aligned} (n-2d)\sqrt{E_s}+n_c+n_1>n\sqrt{E_s}+n_c-n_1, \end{aligned}$$
(3.1)
that is, a decoder error occurs when \(2n_1>2d\sqrt{E_s}\).
The average noise power associated with \(n_1\) is \(d\sigma ^2=d\frac{N_0}{2}\), and as the noise is Gaussian distributed, the probability of decoder error, \(p_{d}\), is given by
$$\begin{aligned} p_d= \frac{1}{\sqrt{\pi d N_0}}\int _{d\sqrt{E_s}}^{\infty } \text {e}^{\frac{-x^2}{dN_0}}dx. \end{aligned}$$
(3.2)
This may be expressed in terms of the complementary error function
$$\begin{aligned} \text {erfc}(y)=\frac{2}{\sqrt{\pi }}\int _{y}^{\infty } \text {e}^{-x^2}dx \end{aligned}$$
(3.3)
and leads to
$$\begin{aligned} p_d= \frac{1}{2}\text {erfc}\left( \sqrt{d\frac{k}{n} \frac{E_b}{N_0}}\right) \end{aligned}$$
(3.4)
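As a numerical illustration of Eq. (3.4), the short Python sketch below evaluates the pairwise error probability for two codewords at Hamming distance d; the function name pairwise_error_soft and the example operating point are illustrative assumptions, not part of the original text.

```python
import math

def pairwise_error_soft(d, k, n, ebno_db):
    """Pairwise error probability of Eq. (3.4) for two codewords at
    Hamming distance d, code rate k/n and a given Eb/N0 in dB."""
    ebno = 10 ** (ebno_db / 10.0)                     # Eb/N0 as a linear ratio
    return 0.5 * math.erfc(math.sqrt(d * (k / n) * ebno))

# Example: distance d = 10 for a rate-1/2 code at Eb/N0 = 4 dB
print(pairwise_error_soft(10, 50, 100, 4.0))
```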
Each of the other \(2^k-2\) codewords may also cause a decoder error, but the weight distribution of the code \(C_i\) is unknown. However, by averaging over all possible codes, knowledge of the weight distribution of a particular code is not required. The probability \(p(d|C_i)\) that two codewords of a code \(C_i\) differ in d bit positions is given by the binomial distribution
$$\begin{aligned} p(d|C_i)=\frac{\frac{n!}{(n-d)!d!}}{2^n} \end{aligned}$$
(3.5)
A given linear code \(C_i\) cannot have codewords of arbitrary weight, because the sum of a subset of codewords is also a codeword. However, for non-linear codes, \(p_{d}\) may be averaged over all of the codes without this constraint.
$$\begin{aligned} \overline{p_{C}}=\sum _{i=1}^{2^{n2^k}}p(d|C_i)p(C_i)<\frac{1}{2^{n2^k}}\sum _{d=0}^n\sum _{i=1}^{2^{n2^k}}\frac{\frac{n!}{(n-d)!d!}}{2^{n+1}}\text {erfc}\left( \sqrt{d\frac{k}{n} \frac{E_b}{N_0}}\right) \end{aligned}$$
(3.6)
rearranging the order of summation
$$\begin{aligned} \overline{p_{C}}<\frac{1}{2^{n2^k}}\sum _{i=1}^{2^{n2^k}}\sum _{d=0}^n\frac{\frac{n!}{(n-d)!d!}}{2^{n+1}}\text {erfc}\left( \sqrt{d\frac{k}{n} \frac{E_b}{N_0}}\right) \end{aligned}$$
(3.7)
and
$$\begin{aligned} \overline{p_{C}}<\frac{1}{2^{n+1}}\sum _{d=0}^n\frac{n!}{(n-d)!d!}\text {erfc}\left( \sqrt{d\frac{k}{n} \frac{E_b}{N_0}}\right) \end{aligned}$$
(3.8)
Remembering that any of the \(2^k-1\) matched filters may cause a decoder error, the overall probability of decoder error averaged over all possible binary codes \(\overline{p_{\tiny \text {overall}}}\), is
$$\begin{aligned} \overline{p_{\tiny \text {overall}}}=1-(1-\overline{p_{C}})^{2^k-1}<2^k\overline{p_{C}} \end{aligned}$$
(3.9)
and
$$\begin{aligned} \overline{p_{\tiny \text {overall}}}<\frac{2^k}{2^{n+1}}\sum _{d=0}^n\frac{n!}{(n-d)!d!}\text {erfc}\left( \sqrt{d\frac{k}{n} \frac{E_b}{N_0}}\right) \end{aligned}$$
(3.10)
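Equation (3.10) may be checked numerically by summing over all Hamming distances d. A minimal Python sketch follows, assuming only the standard library; the function name random_coding_bound is illustrative.

```python
import math

def random_coding_bound(n, k, ebno_db):
    """Exact random coding (union) bound of Eq. (3.10), averaged over
    the ensemble of all binary codes of length n with 2^k codewords."""
    ebno = 10 ** (ebno_db / 10.0)
    total = 0.0
    for d in range(n + 1):
        # binomial weight of distance d times the pairwise erfc term
        total += math.comb(n, d) * math.erfc(math.sqrt(d * (k / n) * ebno))
    return (2 ** k / 2 ** (n + 1)) * total

# Ensemble of (100, 50) codes at Eb/N0 = 4 dB
print(random_coding_bound(100, 50, 4.0))
```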
An analytic solution may be obtained by observing that \(\frac{1}{2}\text {erfc}(y)\) is upper bounded by \(\text {e}^{-y^2}\),
$$\begin{aligned} \overline{p_{\tiny \text {overall}}}<\frac{2^k}{2^n}\sum _{d=0}^n\frac{n!}{(n-d)!d!}\text {e}^{-d\frac{k}{n} \frac{E_b}{N_0}} \end{aligned}$$
(3.11)
and as observed by Wozencraft and Jacobs [6],
$$\begin{aligned} (1+\text {e}^{-\frac{k}{n} \frac{E_b}{N_0}})^n=\sum _{d=0}^n\frac{n!}{(n-d)!d!}\text {e}^{-d\frac{k}{n} \frac{E_b}{N_0}} \end{aligned}$$
(3.12)
and
$$\begin{aligned} \overline{p_{C}}<\frac{1}{2^n}(1+\text {e}^{-\frac{k}{n} \frac{E_b}{N_0}})^n \end{aligned}$$
(3.13)
$$\begin{aligned} \overline{p_{\tiny \text {overall}}}<\frac{2^k}{2^n}(1+\text {e}^{-\frac{k}{n} \frac{E_b}{N_0}})^n \end{aligned}$$
(3.14)
Traditionally, a cut-off rate \(R_0\) is defined after observing that
$$\begin{aligned} \frac{2^k}{2^n}(1+\text {e}^{-\frac{k}{n} \frac{E_b}{N_0}})^n=2^k\left( \frac{1+\text {e}^{-\frac{k}{n} \frac{E_b}{N_0}}}{2}\right) ^n \end{aligned}$$
(3.15)
with
$$\begin{aligned} 2^{R_0}=\left( \frac{2}{1+\text {e}^{-\frac{k}{n} \frac{E_b}{N_0}}}\right) , \end{aligned}$$
(3.16)
then
$$\begin{aligned} \overline{p_{\tiny \text {overall}}}<2^k2^{-nR_0}=2^{k-nR_0}=2^{-n(R_0-\frac{k}{n})} \end{aligned}$$
(3.17)
This result may be interpreted as follows: provided the number of information bits of the code is less than the length of the code times the cut-off rate, the probability of decoder error approaches zero as the length of the code approaches infinity. Equivalently, provided the rate of the code, \(\frac{k}{n}\), is less than the cut-off rate, \(R_0\), the probability of decoder error approaches zero as the length of the code approaches infinity.
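The cut-off rate of Eq. (3.16) and the bound of Eq. (3.17) are equally easy to compute; a sketch is given below, with illustrative function names.

```python
import math

def cutoff_rate(k, n, ebno_db):
    """Cut-off rate R0 of Eq. (3.16) for code rate k/n at a given Eb/N0 in dB."""
    ebno = 10 ** (ebno_db / 10.0)
    return math.log2(2.0 / (1.0 + math.exp(-(k / n) * ebno)))

def cutoff_rate_bound(n, k, ebno_db):
    """Random coding bound of Eq. (3.17): 2^(-n(R0 - k/n))."""
    return 2.0 ** (-n * (cutoff_rate(k, n, ebno_db) - k / n))

# For a rate-1/2 code the bound decays with n once R0 exceeds 1/2
print(cutoff_rate(50, 100, 4.0), cutoff_rate_bound(100, 50, 4.0))
```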
When quantised soft decisions with s levels are used, with integer levels 0 to \(2s-1\) for s even and integer levels 0 to \(s-1\) for s odd, the transmitted binary signal has levels 0 and \(2(s-1)\) for s even, and levels 0 and \(s-1\) for s odd. The probability distribution of the quantised signal (bit) plus noise, after matched filtering, has probabilities \(p_i\), \(i=0\) to \(s-1\), represented as
$$\begin{aligned} p(z)=\sum _{i=0}^{s-1}p_iz^{-2i} \text {, for}\; s \; \text {even} \end{aligned}$$
(3.18)
and
$$\begin{aligned} p(z)=\sum _{i=0}^{s-1}p_iz^{-i} \text {, for}\; s \; \text {odd} \end{aligned}$$
(3.19)
A decoder error occurs if
$$\begin{aligned} s(n-2d)+n_c+n_1>sn+n_c-n_1 \end{aligned}$$
(3.20)
and occurs when
$$\begin{aligned} n_1>sd \end{aligned}$$
(3.21)
and has probability 0.5 when
$$\begin{aligned} n_1=sd \end{aligned}$$
(3.22)
The probability of decoder error may be determined from a summation of terms from the overall probability distribution of the sum of d independent, quantised noise samples, and is given by evaluating a polynomial \(q_d(z)\) at \(z=0\), where \(q_d(z)\) is given by
$$\begin{aligned} q_d(z)=p(z)^d \left( \frac{1-z^{(s-1)d+1}}{1-z}-0.5z^{(s-1)d}\right) \text {, for}\, s\, \text {even} \end{aligned}$$
(3.23)
The \(0.5z^{(s-1)d}\) term corresponds to \(n_1=sd\) when the probability of decoder error is 0.5.
$$\begin{aligned} q_d(z)=p(z)^d \left( \frac{1-z^{\frac{s-1}{2}d+1}}{1-z}-0.5z^{\frac{s-1}{2}d}\right) \text {, when} \,s \,\text {is odd} \end{aligned}$$
(3.24)
and the \(0.5z^{\frac{s-1}{2}d}\) term corresponds to \(n_1=sd\) when the probability of decoder error is 0.5.
The probability of decoder error is given by \(q_d(z)\) when \(z=0\),
$$\begin{aligned} p_d=q_d(0) \end{aligned}$$
(3.25)
The evaluation of the average probability of decoder error for quantised soft decisions, \(\overline{p_{C_Q}}\), is given, as before, by averaging over all codes and rearranging the order of summation
$$\begin{aligned} \overline{p_{C_Q}}<\frac{1}{2^{n2^k}}\sum _{i=1}^{2^{n2^k}}\sum _{d=0}^n\frac{\frac{n!}{(n-d)!d!}}{2^n}q_d(0) \end{aligned}$$
(3.26)
Simplifying
$$\begin{aligned} \overline{p_{C_Q}}<\sum _{d=0}^n\frac{\frac{n!}{(n-d)!d!}}{2^n}q_d(0) \end{aligned}$$
(3.27)
When hard decisions are used, the probability of each transmitted bit being received in error is given by
$$\begin{aligned} p_b=0.5 \text {erfc}\left( \sqrt{\frac{k}{n}\frac{E_b}{N_0}}\right) \end{aligned}$$
(3.28)
Accordingly,
$$\begin{aligned} p(z)=1-p_b+p_bz^{-2} \end{aligned}$$
(3.29)
and \(q_d(z)\) for hard decisions becomes
$$\begin{aligned} q_d(z)=(1-p_b+p_bz^{-2})^d \left( \frac{1-z^{d+1}}{1-z}-0.5z^{d}\right) \end{aligned}$$
(3.30)
giving
$$\begin{aligned} \overline{p_{C_Q}}<\sum _{d=0}^n\frac{\frac{n!}{(n-d)!d!}}{2^n}(1-p_b+p_bz^{-2})^d \left( \frac{1-z^{d+1}}{1-z}-0.5z^{d}\right) \quad \text {for}\;z=0 \end{aligned}$$
(3.31)
As before, any of the \(2^k-1\) matched filters may cause a decoder error, and the overall probability of decoder error averaged over all possible binary codes, \(\overline{p_{overall_Q}}\), is
$$\begin{aligned} \overline{p_{overall_Q}}<1-(1-\overline{p_{C_Q}})^{2^k-1}<2^k\overline{p_{C_Q}} \end{aligned}$$
(3.32)
and
$$\begin{aligned} \overline{p_{overall_Q}}<\frac{2^k}{2^n}\sum _{d=0}^n\frac{n!}{(n-d)!d!}(1-p_b+p_bz^{-2})^d \left( \frac{1-z^{d+1}}{1-z}-0.5z^{d}\right) \text {, for z=0} \end{aligned}$$
(3.33)
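Equation (3.33) may also be evaluated without polynomial algebra: over the d positions in which the two codewords differ, each bit error moves the decision metric by 2 towards the competing codeword, a decoder error occurs when the total exceeds d, and a tie at exactly d is counted with weight 0.5, matching the role of \(q_d(0)\) described above. The Python sketch below follows this reading; the function names are illustrative.

```python
import math

def hard_decision_pairwise(d, p_b):
    """q_d(0) of Eq. (3.30): probability that a codeword at Hamming distance d
    is preferred to the transmitted one, counting ties with weight 0.5."""
    p = 0.0
    for i in range(d + 1):
        term = math.comb(d, i) * p_b ** i * (1 - p_b) ** (d - i)
        if 2 * i > d:
            p += term                 # more than d/2 of the d bits in error
        elif 2 * i == d:
            p += 0.5 * term           # tie: error with probability 0.5
    return p

def hard_decision_bound(n, k, ebno_db):
    """Random coding bound of Eq. (3.33) for hard decision decoding."""
    ebno = 10 ** (ebno_db / 10.0)
    p_b = 0.5 * math.erfc(math.sqrt((k / n) * ebno))      # Eq. (3.28)
    total = sum(math.comb(n, d) * hard_decision_pairwise(d, p_b)
                for d in range(n + 1))
    return (2 ** k / 2 ** n) * total

print(hard_decision_bound(100, 50, 6.0))
```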
When three-level quantisation is used for the received signal plus noise, a threshold \(v_{thresh}\) is defined whereby, if the magnitude of the received signal plus noise is less than \(v_{thresh}\), an erasure is declared; otherwise a hard decision is made. The probability of an erasure, \(p_{\tiny \text {erase}}\), is given by
$$\begin{aligned} p_{\tiny \text {erase}}= \frac{2}{\sqrt{\pi N_0}}\int _0^{\sqrt{\frac{k}{n}E_b}-v_{thresh}} \text {e}^{\frac{-x^2}{N_0}}dx \end{aligned}$$
(3.34)
The probability of a bit error for the hard decision, \(p_b\), is now given by
$$\begin{aligned} p_b= \frac{1}{\sqrt{\pi N_0}}\int _{\sqrt{\frac{k}{n}E_b}+v_{thresh}}^\infty \text {e}^{\frac{-x^2}{N_0}}dx \end{aligned}$$
(3.35)
Accordingly, p(z) becomes
$$\begin{aligned} p(z)=1-p_b-p_{\tiny \text {erase}}+p_{\tiny \text {erase}}z^{-1}+p_bz^{-2} \end{aligned}$$
(3.36)
and \(q_d(z)\) for three-level soft decisions is
$$\begin{aligned} q_d(z)=(1-p_b-p_{\tiny \text {erase}}+p_{\tiny \text {erase}}z^{-1}+p_bz^{-2})^d \left( \frac{1-z^{d+1}}{1-z}-0.5z^{d}\right) \end{aligned}$$
(3.37)
giving
$$\begin{aligned} \overline{p_{overall_Q}}<\frac{2^k}{2^n}\sum _{d=0}^n&\Bigg (\frac{n!}{(n-d)!d!}(1-p_b-p_{\tiny \text {erase}}+p_{\tiny \text {erase}}z^{-1}+p_bz^{-2})^d\nonumber \\&\left( \frac{1-z^{d+1}}{1-z}-0.5z^{d}\right) \Bigg )\quad \text {for}\;z=0 \end{aligned}$$
(3.38)
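The three-level terms in Eq. (3.38) can be evaluated in the same way: each differing bit contributes 2 for a bit error, 1 for an erasure and 0 otherwise, with a decoder error when the total exceeds d and weight 0.5 on a tie at exactly d. The sketch below assumes Python, and the values of \(p_b\) and \(p_{erase}\) in the example are purely illustrative rather than derived from Eqs. (3.34) and (3.35).

```python
def three_level_pairwise(d, p_b, p_erase):
    """q_d(0) of Eq. (3.37): tail probability of the accumulated metric
    over the d differing bits, with weight 0.5 on a tie at exactly d."""
    step = {0: 1.0 - p_b - p_erase, 1: p_erase, 2: p_b}
    dist = {0: 1.0}                    # distribution of the running total
    for _ in range(d):                 # convolve the per-bit distribution d times
        new = {}
        for total, pt in dist.items():
            for inc, pi in step.items():
                new[total + inc] = new.get(total + inc, 0.0) + pt * pi
        dist = new
    return sum(p for total, p in dist.items() if total > d) + 0.5 * dist.get(d, 0.0)

# Illustrative per-bit probabilities for a codeword pair at distance 11
print(three_level_pairwise(11, 0.02, 0.10))
```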
There is a best choice of \(v_{thresh}\) which minimises \(\overline{p_{overall_Q}}\), and this is dependent on the code parameters, (n, k), and \(\frac{E_b}{N_0}\). However, \(v_{thresh}\) is not an unduly sensitive parameter and best values typically range from \(0.6\sigma\) to \(0.7\sigma \). The value of \(0.65\sigma \) is mentioned in Wozencraft and Jacobs [6]. Optimum values of \(v_{thresh}\) are given in Fig. 3.1.

3.3 Examples

The overall probability of decoder error averaged over all possible binary codes has been evaluated for \(\frac{k}{n}=\frac{1}{2}\) for soft decisions, using Eq. (3.10) and the approximation given by Eq. (3.14), and for hard decisions and three-level quantisation, using Eq. (3.38), for various code lengths. Results are shown in Fig. 3.2 for the ensemble of (100, 50) binary codes. The difference between the exact random coding bound, Eq. (3.10), and the original, approximate, random coding bound, Eq. (3.14), is about 0.5 dB for (100, 50) codes. The loss due to hard decisions is around 2.1 dB (at \(1\times 10^{-5}\) it is 2.18 dB), and for three-level quantisation it is around 1 dB (at \(1\times 10^{-5}\) it is 1.03 dB). Also shown in Fig. 3.2 is the sphere packing bound offset by the loss associated with binary transmission.
Results are shown in Fig. 3.3 for the ensemble of (200, 100) binary codes. The difference between the exact random coding bound, Eq. (3.10), and the original, approximate, random coding bound, Eq. (3.14), is about 0.25 dB for (200, 100) codes. The loss due to hard decisions is around 2.1 dB (at \(1\times 10^{-5}\) it is 2.15 dB) and for three-level quantisation it is around 1 dB (at \(1\times 10^{-5}\) it is 0.999 dB). Also shown in Fig. 3.3 is the sphere packing bound offset by the loss associated with binary transmission. The exact random coding bound is now much closer to the sphere packing bound, offset by the loss associated with binary transmission, with a gap of about 0.2 dB at \(10^{-8}\). It should be noted that the sphere packing bound is a lower bound whilst the random binary code bound is an upper bound.
Instead of considering random codes, the effect of soft decision quantisation is analysed for codes with a given weight spectrum. The analysis is restricted to two-level and three-level quantisation because these are the most common. In other cases, the quantisation is chosen such that near ideal soft decision decoding is realised. The analysis starts with a hypothetical code in which the Hamming distance between all codewords is the same, \(d_{min}\). The probability of decoder error due to a single matched filter having a greater output than the correct matched filter follows immediately from Eq. (3.4) and the code parameters may be eliminated by considering \(\frac{E_s}{N_0}\) instead of \(\frac{E_b}{N_0}\).
$$\begin{aligned} p_d= \frac{1}{2}\text {erfc}\left( \sqrt{d_{min}\frac{E_s}{N_0}}\right) \end{aligned}$$
(3.39)
For hard decisions and three-level quantisation, \(p_d\) is given by
$$\begin{aligned} p_d= (1-p_b-p_{\tiny \text {erase}}+p_{\tiny \text {erase}}z^{-1}+p_bz^{-2})^{d_{min}} \left( \frac{1-z^{d_{min}+1}}{1-z}-0.5z^{d_{min}}\right) \text {, for}\;z=0 \end{aligned}$$
(3.40)
For hard decisions, \(p_{\tiny \text {erase}}\) is set equal to zero and \(p_b\) is given by Eq. (3.28). For three-level quantisation, \(p_{\tiny \text {erase}}\) is expressed in terms of \(\frac{E_{Qs}}{N_0}\), the symbol energy to noise spectral density ratio required when quantised soft decision decoding is used.
$$\begin{aligned} p_{\tiny \text {erase}}= \frac{2}{\sqrt{\pi N_0}}\int _0^{\sqrt{E_{Qs}}-v_{thresh}} \text {e}^{\frac{-x^2}{N_0}}dx \end{aligned}$$
(3.41)
Similarly, the probability of a bit error for the hard decision, \(p_b\) is given by
$$\begin{aligned} p_b= \frac{1}{\sqrt{\pi N_0}}\int _{\sqrt{E_{Qs}}+v_{thresh}}^\infty \text {e}^{\frac{-x^2}{N_0}}dx \end{aligned}$$
(3.42)
By equating Eq. (3.39) with Eq. (3.40), the \(\frac{E_{Qs}}{N_0}\) required for the same decoder error probability may be determined as a function of \(\frac{E_{s}}{N_0}\) and \(d_{min}\). The loss, in dB, due to soft decision quantisation may be defined as
$$\begin{aligned} Loss_Q= 10\log _{10}\frac{E_{Qs}}{N_0}-10\log _{10}\frac{E_s}{N_0} \end{aligned}$$
(3.43)
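One way to compute \(Loss_Q\) numerically is to solve for the \(\frac{E_{Qs}}{N_0}\) at which Eq. (3.40) equals Eq. (3.39), for example by bisection. The Python sketch below does this for the hard decision case (\(p_{erase}=0\), \(v_{thresh}=0\)); the bracketing interval and the function names are illustrative assumptions.

```python
import math

def soft_pd(dmin, esno):
    return 0.5 * math.erfc(math.sqrt(dmin * esno))            # Eq. (3.39)

def hard_pd(dmin, eqsno):
    p_b = 0.5 * math.erfc(math.sqrt(eqsno))                   # Eq. (3.42) with v_thresh = 0
    p = 0.0
    for i in range(dmin + 1):
        term = math.comb(dmin, i) * p_b ** i * (1 - p_b) ** (dmin - i)
        p += term if 2 * i > dmin else (0.5 * term if 2 * i == dmin else 0.0)
    return p

def quantisation_loss_db(dmin, esno_db):
    """Loss_Q of Eq. (3.43) for hard decisions, found by bisection on E_Qs/N0."""
    esno = 10 ** (esno_db / 10.0)
    target = soft_pd(dmin, esno)
    lo, hi = esno, 10.0 * esno            # hard decisions need more energy than soft
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if hard_pd(dmin, mid) > target:
            lo = mid                      # error rate still too high: raise energy
        else:
            hi = mid
    return 10 * math.log10(0.5 * (lo + hi)) - esno_db

# Hard decision quantisation loss for d_min = 11 at Es/N0 = 3 dB
print(quantisation_loss_db(11, 3.0))
```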
Figure 3.4 shows the soft decision quantisation loss, \(Loss_Q\), as a function of \(d_{min}\) and \(\frac{E_s}{N_0}\) for hard decisions. For low \(d_{min}\), the loss is around 1.5 dB but rises rapidly with \(d_{min}\) to around 2 dB. For \(\frac{E_s}{N_0}\) = 3 dB, practical systems operate with \(d_{min}\) less than 15 or so because the decoder error rate is already very low (at \(d_{min}\) = 15, the decoder error rate is less than \(1\times 10^{-20}\)). Most practical systems will operate where the loss is around 2 dB. Low code rate systems (rate \(\frac{1}{3}\) or less) operate with negative \(\frac{E_s}{N_0}\) ratios (in dB) with \(d_{min}\) in the range 25 to 40, whereas \(\frac{1}{2}\) code rate systems with \(d_{min}\) in the range 20 to 30 will typically operate at \(\frac{E_s}{N_0}\) around 0 dB. Of course, not all decoder error events are \(d_{min}\) events, but the asymptotic nature of the loss produces an average loss of around 2 dB.
Figure 3.5 shows the soft decision quantisation loss, \(Loss_Q\), as a function of \(d_{min}\) and \(\frac{E_s}{N_0}\) for three-level soft decisions. An optimum threshold has been determined for each value of \(d_{min}\) and \(\frac{E_s}{N_0}\), and these threshold values are in terms of \(\sqrt{E_s}-y\times \sigma \), with \(y \times \sigma \) plotted against \(d_{min}\) in Fig. 3.1. Unlike the hard decision case, for three-level quantisation the lowest loss occurs at high \(d_{min}\) values. In common with hard decisions, the lowest loss is for the smallest \(\frac{E_s}{N_0}\) values, which are negative when expressed in dB. In absolute terms, the lowest loss is less than 1 dB for \(\frac{E_s}{N_0}= -3\) dB and high \(d_{min}\). This corresponds to low-rate codes with code rates of \(\frac{1}{3}\) or \(\frac{1}{4}\). The loss for three-level quantisation is so much smaller than that for hard decisions that it is somewhat surprising that three-level quantisation is not found more often in practical systems. The erasure channel is much underrated.

3.4 A Hard Decision Dorsch Decoder and BCH Codes

The effects of soft decision quantisation on the decoding performance of BCH codes may be explored using the extended Dorsch decoder (see Chap. 15) and a bounded distance, hard decision decoder, first devised by Peterson [5] and refined by Chien [2], Berlekamp [1] and Massey [4]. The extended Dorsch decoder may be used directly on the received three-level quantised soft decisions and, of course, on the received unquantised soft decisions. It may also be used on the received hard decisions, to form a near maximum likelihood decoder which is a non-bounded-distance, hard decision decoder, but this requires some modification.
The first stage of the extended Dorsch decoder is to rank the received signal samples in order of likelihood. For hard decisions, all signal samples have equal likelihood and no ranking is possible. However, a random ordering of k independent bits may be substituted for the ranked k most reliable, independent bits. Provided the number of bit errors contained in these k bits is within the search space of the decoder, the most likely, or the correct, codeword will be found by the decoder. Given that the received hard decisions contain t errors, and assuming the search space of the decoder can accommodate m errors, the probability of finding the correct codeword, or a more likely codeword, \(p_f\), is given by
$$\begin{aligned} p_f=\sum _{i=0}^m \frac{n!}{(n-i)!\;i!} \left( \frac{t}{n}\right) ^i\left( 1-\frac{t}{n}\right) ^{n-i} \end{aligned}$$
(3.44)
This probability may be improved by repeatedly carrying out a random ordering of the received samples and running the decoder. With N such orderings, the probability of finding the correct codeword, or a more likely codeword, \(p_{Nf}\), increases and is given by
$$\begin{aligned} p_{Nf}=1-\left( 1-\sum _{i=0}^m \frac{n!}{(n-i)!\;i!} \left( \frac{t}{n}\right) ^i\left( 1-\frac{t}{n}\right) ^{n-i}\right) ^N \end{aligned}$$
(3.45)
Increasing N gives
$$\begin{aligned} \left( 1-\sum _{i=0}^m \frac{n!}{(n-i)!\;i!} \left( \frac{t}{n}\right) ^i\left( 1-\frac{t}{n}\right) ^{n-i}\right) ^N \simeq 0 \end{aligned}$$
(3.46)
and
$$\begin{aligned} p_{Nf} \simeq 1 \end{aligned}$$
(3.47)
Of course there is a price to be paid because the complexity of the decoder increases with N. The parity check matrix needs to be solved N times. On the other hand, the size of the search space may be reduced because the repeated decoding allows several chances for the correct codeword to be found.
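Equations (3.44) and (3.45) are straightforward to evaluate directly, as in the Python sketch below; the function names and the values of n, t, m and N are illustrative.

```python
import math

def p_found(n, t, m):
    """Eq. (3.44): probability of finding the correct (or a more likely)
    codeword when the received word contains t errors and the decoder
    search space accommodates up to m errors."""
    return sum(math.comb(n, i) * (t / n) ** i * (1 - t / n) ** (n - i)
               for i in range(m + 1))

def p_found_repeated(n, t, m, N):
    """Eq. (3.45): the same probability after N independent random orderings."""
    return 1.0 - (1.0 - p_found(n, t, m)) ** N

# Example: a length-63 code with t = 7 errors and a search space covering m = 3
print(p_found(63, 7, 3), p_found_repeated(63, 7, 3, 10))
```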
The modified Dorsch decoder and a bounded distance, hard decision BCH decoder have been applied to the (63, 36, 11) BCH code, and the simulation results are shown in Fig. 3.6. The decoder search space was set to search \(1\times 10^6\) codewords for each received vector, which ensures that quasi maximum likelihood decoding is obtained. Also shown in Fig. 3.6 is the sphere packing bound for a (63, 36) code offset by the binary transmission loss. As can be seen, the unquantised soft decision decoder produces a performance close to the offset sphere packing bound. The three-level quantisation decoder results are offset approximately 0.9 dB at \(1\times 10^{-5}\) from the unquantised soft decision performance. For hard decisions, the modified Dorsch decoder has a performance approximately 2 dB at \(1\times 10^{-3}\) from the unquantised soft decision performance and approximately 2.2 dB at \(1\times 10^{-5}\). Interestingly, this hard decision performance is approximately 0.4 dB better than the bounded distance BCH decoder correcting up to and including five errors.
The results for the BCH (127, 92, 11) code are shown in Fig. 3.7. These results are similar to those of the (63, 36, 11) BCH code. At \(1\times 10^{-5}\) Frame Error Rate (FER), the unquantised soft decision decoder produces a performance nearly 0.2 dB from the offset sphere packing bound. The three-level quantisation decoder results are offset approximately 1.1 dB at \(1\times 10^{-5}\) from the unquantised soft decision performance. This is a higher rate code than the (63, 36, 11) code, and at \(1\times 10^{-5}\) the \(\frac{E_s}{N_0}\) ratio is 4.1 dB. Figure 3.5 for a \(d_{min}\) of 11 and an \(\frac{E_s}{N_0}\) ratio of 3 dB indicates a loss of 1.1 dB, giving good agreement to the simulation results. For hard decisions, the modified Dorsch decoder has a performance approximately 2 dB at \(1\times 10^{-3}\) from the unquantised soft decision performance, and approximately 2.1 dB at \(1\times 10^{-5}\). This is consistent with the theoretical hard decision losses shown in Fig. 3.4. As before, the hard decision performance obtained with the modified Dorsch decoder is better than the bounded distance BCH decoder correcting up to and including five errors, and shows almost 0.5 dB improvement.
The results for the BCH (127, 64, 21) code are shown in Fig. 3.8. This is an outstanding code, and consequently the unquantised soft decision decoding performance is very close to the offset sphere packing bound, being almost 0.1 dB away from the bound at \(1\times 10^{-5}\). However, a list size of \(10^7\) codewords was used in order to ensure that near maximum likelihood performance was obtained by the modified Dorsch decoder. Similar to before, the three-level quantisation decoder results are offset approximately 1.1 dB at \(1\times 10^{-5}\) from the unquantised soft decision performance. However, \(3\times 10^7\) codewords were necessary in order for near maximum likelihood performance to be obtained by the modified Dorsch decoder operating on the three-level quantised decisions. The BCH bounded distance decoder is approximately 3 dB offset from the unquantised soft decision decoding performance and 1 dB from the modified Dorsch decoder operating on the hard decisions.
These simulation results for the losses due to quantisation of the soft decisions show a very close agreement to the losses anticipated from the theoretical analysis.

3.5 Summary

In this chapter, we derived both approximate and exact bounds on the performance of soft decision decoding compared to hard decision decoding as a function of code parameters. The effects of soft decision quantisation were explored showing the decoding performance loss as a function of number of quantisation levels. Results were presented for the ensembles of all (100, 50) and (200, 100) codes. It was shown that the loss due to quantisation is a function of both \(d_{min}\) and SNR. Performance graphs showing the relationship were presented.
It was shown that the near maximum likelihood decoder, the Dorsch decoder described in Chap. 15, may be adapted for hard decision decoding in order to produce better performance than bounded distance decoding. Performance graphs were presented for some BCH codes showing the performance achieved compared to bounded distance decoding.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://​creativecommons.​org/​licenses/​by/​4.​0/​), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the book’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the book’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Literature
1. Berlekamp, E.: On decoding binary Bose-Chaudhuri-Hocquenghem codes. IEEE Trans. Inf. Theory 11(4), 577–579 (1965)
2. Chien, R.: Cyclic decoding procedures for Bose-Chaudhuri-Hocquenghem codes. IEEE Trans. Inf. Theory 10(4), 357–363 (1964)
3. Gallager, R.G.: A simple derivation of the coding theorem and some applications. IEEE Trans. Inf. Theory 11(1), 459–470 (1960)
5.
6. Wozencraft, J., Jacobs, I.: Principles of Communication Engineering. Wiley, New York (1965)