Abstract
In this paper we carry out an in-depth study of the average decoding error probability of the random parity-check matrix ensemble over the erasure channel under three decoding principles, namely unambiguous decoding, maximum likelihood decoding and list decoding. We obtain explicit formulas for the average decoding error probabilities of the random parity-check matrix ensemble under these three decoding principles and compute the error exponents. Moreover, for unambiguous decoding, we compute the variance of the decoding error probability of the random parity-check matrix ensemble and the error exponent of the variance, which implies a strong concentration result: roughly speaking, the ratio of the decoding error probability of a random linear code in the ensemble to the average decoding error probability of the ensemble converges to 1 with high probability as the code length goes to infinity.
Notes
Communicated by L. Mérai.
The research of Fang-Wei Fu was supported in part by the National Key Research and Development Program of China under Grants 2022YFA1005000 and 2018YFA0704703, in part by the National Natural Science Foundation of China under Grants 12141108, 62371259, 12226336, in part by the Fundamental Research Funds for the Central Universities of China (Nankai University), and in part by the Nankai Zhide Foundation. The research of M. Xiong was supported by RGC grant number 16306520 from Hong Kong.
1 Introduction
1.1 Background
In digital communication, it is common that messages transmitted through a public channel may be distorted by the channel noise. The theory of error-correcting codes is the study of mechanisms to cope with this problem. This is an important research area with many applications in modern life. For example, error-correcting codes are widely employed in cell phones to correct errors arising from fading noise during high frequency radio transmission. One of the major challenges in coding theory remains to construct new error-correcting codes with good properties and to study their decoding and encoding algorithms.
In a binary erasure channel (BEC), a binary symbol is either received correctly or totally erased with probability \(\varepsilon \). The concept of the BEC was first introduced by Elias in 1955 [3]. Together with the binary symmetric channel (BSC), the BEC is frequently used in coding theory and information theory because these are among the simplest channel models, and many problems in communication theory can be reduced to problems over a BEC. Here we consider more generally a q-ary erasure channel, in which a q-ary symbol is either received correctly or totally erased with probability \(\varepsilon \).
The problem of decoding linear codes over the erasure channel has received renewed attention in recent years due to its wide applications on the Internet and in distributed storage systems, where random packet losses are analyzed [2, 11, 12]. Three important decoding principles, namely unambiguous decoding, maximum likelihood decoding and list decoding, were studied in recent years for linear codes over the erasure channel; the corresponding decoding error probabilities under these principles were also investigated (see [5, 9, 14, 17] and references therein).
In particular, in [14], improving upon previous results, the authors provided a detailed study of the decoding error probabilities of a general q-ary linear code over the erasure channel under the three decoding principles. Via the notion of \(q^\ell \)-incorrigible sets for linear codes, they showed that all these decoding error probabilities can be expressed explicitly in terms of the r-th support weight distributions of the linear codes. As applications they obtained explicit formulas of the decoding error probabilities for some of the most interesting linear codes, such as MDS codes, the binary Golay code, the simplex codes and the first-order Reed-Muller codes, for which the r-th support weight distributions are known. They also computed the average decoding error probabilities of a random \([n,k]_q\) code over the erasure channel and obtained the error exponent of a random \([n,nR]_q\) code (\(0<R<1\)) under unambiguous decoding. The error exponents of a random \([n,nR]_q\) code under list and maximum likelihood decoding were obtained in [18].
1.2 Statement of the main results
In this paper we consider a different code ensemble, namely the random parity-check matrix ensemble \(\mathcal {R}_{m,n}\), that is, the set of all \(m \times n\) matrices over \(\mathbb {F}_q\) endowed with the uniform probability, each of which is associated with a parity-check code as follows: for each \(H \in \mathcal {R}_{m,n}\), the corresponding parity-check code \(C_H\) is given by
$$\begin{aligned} C_H:=\left\{ \textbf{x} \in \mathbb {F}_q^n: H\textbf{x}^T=\textbf{0}\right\} . \end{aligned}$$
(1)
Here boldface letters such as \(\textbf{x}\) denote row vectors.
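For illustration, the following Python sketch (binary case \(q=2\); the helper names are illustrative and not from the references) samples a matrix H from \(\mathcal {R}_{m,n}\) and computes \(\dim C_H=n-\textrm{rk}(H)\):

```python
import numpy as np

def gf2_rank(M):
    """Rank of a 0/1 matrix over F_2 by Gaussian elimination."""
    M = (np.array(M) % 2).astype(np.uint8)
    rank = 0
    for c in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, c]), None)
        if pivot is None:
            continue                      # no pivot in this column
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]           # clear column c in the other rows
        rank += 1
    return rank

rng = np.random.default_rng(1)
m, n = 5, 10
H = rng.integers(0, 2, size=(m, n))       # one uniform draw from R_{m,n}, q = 2
print("rk(H) =", gf2_rank(H), " dim C_H =", n - gf2_rank(H))
# The rows of H need not be linearly independent, so dim C_H >= n - m,
# with equality for most draws.
```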
The ensemble \(\mathcal {R}_{m,n}\) has been studied for a long time and many strong results have been obtained. For example, in the classical work of Gallager [7], an upper bound on the average number of codewords of a given weight in \(\mathcal {R}_{m,n}\) was obtained, from which information about the minimum-distance distribution of \(\mathcal {R}_{m,n}\) can be derived (see [7, Theorems 2.1-2.2]); in [1] (see also [4]), union bounds on the block erasure probabilities of the ensemble \(\mathcal {R}_{m,n}\) were obtained under maximum likelihood decoding. More recently, the undetected error probability of the ensemble \(\mathcal {R}_{m,n}\) was studied over the binary symmetric channel (i.e. \(q=2\)) by Wadayama [16], and some bounds on the error probability under the maximum likelihood decoding principle were obtained over the q-ary erasure channel [6, 10]. It is easy to see that \(\mathcal {R}_{m,n}\) contains all linear codes in the random \([n,n-m]_q\) code ensemble considered in [14], but these two ensembles are different for two reasons: first, in the random \([n,k]_q\) code ensemble considered in [14], each \([n,k]_q\) code is counted exactly once, while in \(\mathcal {R}_{m,n}\) each code is counted with some multiplicity, as different choices of the matrix H may give rise to the same code; second, some codes in \(\mathcal {R}_{m,n}\) may have rates strictly larger than \(1-\frac{m}{n}\), as the rows of H may not be linearly independent.
It is conceivable that most of the codes in \(\mathcal {R}_{m,n}\) have rate \(1-\frac{m}{n}\), and the average behavior of codes in \(\mathcal {R}_{m,n}\) should be similar to that of the random \([n,n-m]_q\) code ensemble considered in [14]. We will show that this is indeed the case. Actually we will obtain stronger results, taking advantage of the fine algebraic structure of the ensemble \(\mathcal {R}_{m,n}\), which may not be readily available in the random \([n,k]_q\) code ensemble. Such structure has been exploited in [16].
We first obtain explicit formulas for the average decoding error probability of the ensemble \(\mathcal {R}_{m,n}\) over the erasure channel under the three different decoding principles. This is comparable to [14, Theorem 2] for the random \([n,k]_q\) code ensemble. Such formulas are useful as they allow explicit evaluations of the average decoding error probabilities for any given m and n, hence giving us a meaningful guidance as to what to expect for a good \([n,n-m]_q\) code over the erasure channel.
Theorem 1
Let \(\mathcal {R}_{m,n}\) be the random matrix ensemble described above. Denote by \(\left[ {\begin{array}{c}n\\ k\end{array}}\right] _q\) the Gaussian q-binomial coefficient (see Sect. 2.2) and denote
$$\begin{aligned} \psi _m(i):=\prod _{k=0}^{i-1}(1-q^{k-m}), \quad 1 \le i \le m. \end{aligned}$$
(2)
1.
The average unsuccessful decoding probability of \(\mathcal {R}_{m,n}\) under list decoding with list size \(q^\ell \), where \(\ell \) is a non-negative integer, is given by
$$\begin{aligned} P_{\textrm{ld}}(\mathcal {R}_{m,n},\ell ,\varepsilon )=\sum _{i=\ell +1}^{n}\left( {\begin{array}{c}n\\ i\end{array}}\right) \varepsilon ^i(1-\varepsilon )^{n-i}\sum _{j=0}^{i-\ell -1}\left[ {\begin{array}{c}i\\ j\end{array}}\right] _q\psi _m(j)\,q^{m(j-i)}. \end{aligned}$$
(3)
2.
The average unsuccessful decoding probability of \(\mathcal {R}_{m,n}\) under unambiguous decoding is given by
$$\begin{aligned} P_{\textrm{ud}}(\mathcal {R}_{m,n},\varepsilon )=\sum _{i=1}^{n}\left( {\begin{array}{c}n\\ i\end{array}}\right) \left( 1-\psi _m(i)\right) \varepsilon ^i(1-\varepsilon )^{n-i}. \end{aligned}$$
(4)
3.
The average decoding error probability of \(\mathcal {R}_{m,n}\) under maximum likelihood decoding is given by
$$\begin{aligned} P_{\textrm{mld}}(\mathcal {R}_{m,n},\varepsilon )=\sum _{i=1}^{n}\left( {\begin{array}{c}n\\ i\end{array}}\right) \varepsilon ^i(1-\varepsilon )^{n-i}\sum _{\ell =1}^{i}\left( 1-q^{-\ell }\right) \left[ {\begin{array}{c}i\\ \ell \end{array}}\right] _q\psi _m(i-\ell )\,q^{-m\ell }. \end{aligned}$$
(5)
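The formulas of Theorem 1 are straightforward to evaluate numerically. The following Python sketch (illustrative code, not from the paper; exact rational arithmetic is used to avoid cancellation in the huge Gaussian binomials) evaluates (3) and (4):

```python
from fractions import Fraction
from math import comb

def gauss_binom(n, k, q):
    """Gaussian binomial coefficient [n, k]_q (exact integer)."""
    if k < 0 or k > n:
        return 0
    num = den = 1
    for j in range(k):
        num *= q ** (n - j) - 1
        den *= q ** (k - j) - 1
    return num // den

def psi(m, i, q):
    """psi_m(i) = prod_{k=0}^{i-1}(1 - q^(k-m)) as an exact Fraction."""
    if i < 0 or i > m:
        return Fraction(0)
    num = 1
    for k in range(i):
        num *= q ** m - q ** k
    return Fraction(num, q ** (m * i))

def p_ud(m, n, q, eps):
    """Eq. (4): average unsuccessful decoding probability, unambiguous decoding."""
    return sum(comb(n, i) * (1 - psi(m, i, q)) * eps ** i * (1 - eps) ** (n - i)
               for i in range(1, n + 1))

def p_ld(m, n, q, eps, ell):
    """Eq. (3): average unsuccessful decoding probability, list size q^ell.
    With ell = 0 this coincides with p_ud (see Remark 1)."""
    total = Fraction(0)
    for i in range(ell + 1, n + 1):
        inner = sum(gauss_binom(i, j, q) * psi(m, j, q)
                    * Fraction(1, q ** (m * (i - j)))
                    for j in range(i - ell))
        total += comb(n, i) * eps ** i * (1 - eps) ** (n - i) * inner
    return total

# Table 1 parameters: m = 50, n = 100, q = 2, eps = 0.25
eps = Fraction(1, 4)
print(float(p_ud(50, 100, 2, eps)), float(p_ld(50, 100, 2, eps, 1)))
```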
Remark 1
By using the elementary identity
$$\begin{aligned} \sum _{j=0}^{i}\left[ {\begin{array}{c}i\\ j\end{array}}\right] _q\psi _m(j)\,q^{m(j-i)}=1, \end{aligned}$$
we see that \(P_{\textrm{ud}}(\mathcal {R}_{m,n},\varepsilon )=P_{\textrm{ld}}(\mathcal {R}_{m,n},0,\varepsilon )\). We list the formula for \(P_{\textrm{ud}}(\mathcal {R}_{m,n},\varepsilon )\) here to emphasize unambiguous decoding, which we will study further later.
Remark 2
In Table 1, we use Mathematica to compute the numerical values of \(P_{\textrm{ud}}(\mathcal {R}_{m,n},\varepsilon ), P_{\textrm{mld}}(\mathcal {R}_{m,n},\varepsilon )\) and \(P_{\textrm{ld}}(\mathcal {R}_{m,n},\ell ,\varepsilon )\) (\(\ell =1,2\)) for \(m=50, n=100, q=2\) and \(\varepsilon =0.05i\) with \(1 \le i \le 10\). It can be seen that when \(\varepsilon \) is much smaller than the rate of the code, which is \(\frac{1}{2}\), the decoding error probabilities are all very small; in particular, \(P_{\textrm{mld}}(\mathcal {R}_{m,n},\varepsilon )\) and \(P_{\textrm{ud}}(\mathcal {R}_{m,n},\varepsilon )\) are of the same order of magnitude, and \(P_{\textrm{ld}}(\mathcal {R}_{m,n},\ell ,\varepsilon )\) decreases much further as \(\ell \) increases. The fact that these decoding error probabilities are small will be captured by the error exponents studied in Theorem 2.
Table 1
Decoding error probability for \(\mathcal {R}_{m,n}\) where \(m=50, n=100, q=2, \varepsilon =0.05i\) with \(1 \le i \le 10\)
To make a comparison of Theorem 1 with corresponding results for the random \([n,k]_q\) code ensemble, we take \(k=n-m\) and rewrite the explicit formulas obtained in [14, 18] as follows (see [14, Theorem 2] and [18, Section 6]): Denote by \(\mathcal {C}_{m,n}\) the random \([n,n-m]_q\) code ensemble.
1.
The average unsuccessful decoding probability of \(\mathcal {C}_{m,n}\) under list decoding with list size \(q^\ell \), where \(\ell \) is a non-negative integer, is given by
(6)
2.
The average unsuccessful decoding probability of \(\mathcal {C}_{m,n}\) under unambiguous decoding is given by
In this respect, the random code ensemble \(\mathcal {C}_{m,n}\) behaves slightly better than \(\mathcal {R}_{m,n}\) in terms of the average decoding error probabilities. On the other hand, the difference between the two ensembles seems to be rather small: we use Mathematica to compute the numerical values of the corresponding decoding error probabilities for \(\mathcal {C}_{m,n}\), and these values agree with those in Table 1 to all displayed digits. As it turns out, the difference of the decoding error probabilities between the two ensembles is much smaller in magnitude, as can be seen in Table 2.
Table 2
Difference of the decoding error probabilities between \(\mathcal {R}_{m,n}\) and \(\mathcal {C}_{m,n}\) where \(m=50, n=100, q=2, \varepsilon =0.05i\) with \(1 \le i \le 10\)
Next, letting \(m=(1-R)n\) for \(0< R < 1\), we compute the error exponents of the average decoding error probability of the ensemble series \(\{\mathcal {R}_{(1-R)n,n}\}\) as \(n \rightarrow \infty \) under these decoding principles.
Theorem 2
Let the rate \(0< R < 1\) be fixed and \(n \rightarrow \infty \).
1.
For any fixed integer \(\ell \ge 0\), the error exponent \(T_{\textrm{ld}}(\ell ,\varepsilon )\) for the average unsuccessful decoding probability of \(\{\mathcal {R}_{(1-R)n,n}\}\) under list decoding with list size \(q^\ell \) is given by
$$\begin{aligned} T_{\textrm{ld}}(\ell ,\varepsilon )=\left\{ \begin{array}{lcl} (\ell +1)(1-R)-\log _q\left( 1+(q^{\ell +1}-1)\varepsilon \right) & : & \text{ if } 0<R \le \frac{1-\varepsilon }{1+(q^{\ell +1}-1)\varepsilon }, \\ (1-R)\log _q\frac{1-R}{\varepsilon }+R\log _q\frac{R}{1-\varepsilon } & : & \text{ if } \frac{1-\varepsilon }{1+(q^{\ell +1}-1)\varepsilon }< R < 1-\varepsilon , \\ 0 & : & \text{ if } 1-\varepsilon \le R< 1. \end{array}\right. \end{aligned}$$
2.
The error exponents for the average unsuccessful decoding probability of \(\{\mathcal {R}_{(1-R)n,n}\}\) under unambiguous decoding and maximum likelihood decoding (respectively) are both given by
$$\begin{aligned} T_{\textrm{ud}}(\varepsilon )=T_{\textrm{mld}}(\varepsilon )=T_{\textrm{ld}}(0,\varepsilon ). \end{aligned}$$
It turns out that the error exponents obtained here under these decoding principles are identical to those for the random \([n,nR]_q\) code ensemble obtained in [14, Theorem 3] and [18, Theorems 1.3 and 1.4].
We establish a strong concentration result for the unsuccessful decoding probability of a random code in the ensemble \(\mathcal {R}_{(1-R)n,n}\) towards the mean under unambiguous decoding.
Theorem 3
Let the rate \(0< R < 1\) be fixed and \(n \rightarrow \infty \). Suppose that one of the following two conditions holds:
(1)
\(\frac{1-\varepsilon }{1+(q-1)\varepsilon ^2} \le R < 1-\varepsilon \);
(2)
\(q \in \{2,3,4\}\) and \(\frac{1-\varepsilon }{1+(q-1)\varepsilon } \le R < \frac{1-\varepsilon }{1+(q-1)\varepsilon ^2}\).
Then as \(H_n\) runs over the ensemble \(\mathcal {R}_{(1-R)n,n}\), we have
$$\begin{aligned} \frac{P_\textrm{ud}(H_n,\varepsilon )}{P_{\textrm{ud}}(\mathcal {R}_{(1-R)n,n},\varepsilon )} \rightarrow 1 \quad \text{ in } \text{ probability, } \text{ as } n \rightarrow \infty . \end{aligned}$$
(11)
Note that in the range \(0<R<1-\varepsilon \) it is known that \(T_\textrm{ud}(\varepsilon ) > 0\) (see Theorem 2), hence
$$\begin{aligned} P_\textrm{ud}(\mathcal {R}_{(1-R)n,n},\varepsilon )=q^{-n(T_\textrm{ud}(\varepsilon )+o(1))} \rightarrow 0, \quad \text{ as } n \rightarrow \infty , \end{aligned}$$
so (11) shows that \(P_\textrm{ud}(H_n,\varepsilon )\) also tends to zero exponentially fast with high probability for the ensemble \(\mathcal {R}_{(1-R)n,n}\) under either Condition (1) or (2) of Theorem 3.
The paper is organized as follows. In Sect. 2, we introduce the three decoding principles and the Gaussian q-binomial coefficient in more detail. In Sect. 3, we provide three counting results regarding \(m \times n\) matrices of certain rank over \(\mathbb {F}_q\). In Sects. 4, 5 and 6, we give the proofs of Theorems 1, 2 and 3 respectively. The proof of Theorem 3 involves some technical calculus computations on the error exponent of the variance; in order to streamline the proofs, we defer some of the arguments to the Appendix (Sect. 7). Finally, we conclude this paper in Sect. 8.
2 Preliminaries
2.1 Three decoding principles, the average decoding error probability, and the error exponent
The three decoding principles have been well studied in the literature. For the convenience of the reader, we explain these terms here, following the presentation outlined in [14].
In a q-ary erasure channel, the channel input alphabet is a finite field \(\mathbb {F}_q\) of order q, and during transmission, each symbol \(x \in \mathbb {F}_q\) is either received correctly with probability \(1-\varepsilon \) or erased with probability \(\varepsilon \)\((0<\varepsilon <1)\).
Let C be a code of length n over \(\mathbb {F}_q\). For a codeword \(\textbf{c}=(c_1,c_2,\ldots ,c_n)\in C\) that was transmitted through the channel, suppose the word \(\textbf{r}=(r_1,r_2,\ldots ,r_n)\) is received. Denote by E the set of indices i such that \(r_i=\square \), i.e., such that the i-th symbol was erased during the transmission. In this case, we say that the erasure set E occurs. The probability P(E) that this erasure set occurs for \(\textbf{c}\) is clearly \(\varepsilon ^{\#E}(1-\varepsilon )^{n-\#E}\). Here \(\#E\) denotes the cardinality of the set E.
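As a quick illustration, the following Python snippet (illustrative code, not from the paper) simulates the erasure channel; a given erasure set E of size \(\#E\) then occurs with probability \(\varepsilon ^{\#E}(1-\varepsilon )^{n-\#E}\):

```python
import random

def transmit(codeword, eps, rng=random):
    """Erasure channel: each symbol is erased (None) independently with prob. eps."""
    return [None if rng.random() < eps else c for c in codeword]

random.seed(0)
c = [1, 0, 1, 1, 0, 0, 1, 0]        # a word of length n = 8 over F_2
r = transmit(c, 0.3)
E = {i for i, ri in enumerate(r) if ri is None}
print(r, "erasure set E =", E)
# This particular E occurs with probability 0.3**len(E) * 0.7**(8 - len(E)).
```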
Let \(C(E,\textbf{r})\) be the set of all codewords of C that match the received word \(\textbf{r}\), that is,
$$\begin{aligned} C(E,\textbf{r}):=\left\{ \textbf{c} \in C: c_i=r_i \text{ for } \text{ all } i \in [n]{\setminus } E\right\} . \end{aligned}$$
Here \([n]=\{1,2,\ldots ,n\}\). The decoding problem is about how to choose one or more possible codewords in the set \(C(E,\textbf{r})\) when a word \(\textbf{r}\) is received. Now we consider three decoding principles:
In unambiguous decoding, the decoder outputs the unique codeword in \(C(E,\textbf{r})\) if \(\#C(E,\textbf{r})=1\) and declares “failure” otherwise. The unsuccessful decoding probability \(P_{\textrm{ud}}(C,\varepsilon )\) of C under unambiguous decoding is defined to be the probability that the decoder declares “failure”.
In list decoding, the decoder with list size \(q^{\ell }\) outputs all the codewords in \(C(E,\textbf{r})\) if \(\#C(E,\textbf{r})\le q^{\ell }\) and declares “failure” otherwise. The unsuccessful decoding probability \(P_{\textrm{ld}}(C,\ell ,\varepsilon )\) of C under list decoding with list size \(q^{\ell }\) is defined to be the probability that the decoder declares “failure”.
In maximum likelihood decoding, the decoder chooses a codeword in \(C(E,\textbf{r})\) uniformly at random and outputs this codeword. The decoding error probability \(P_{\textrm{mld}}(C,\varepsilon )\) of C under maximum likelihood decoding is defined to be the probability that the codeword output by the decoder is not the transmitted codeword.
The computation of \(P_{\textrm{ud}}(C,\varepsilon ), P_{\textrm{ld}}(C,\ell ,\varepsilon )\) and \(P_{\textrm{mld}}(C,\varepsilon )\) can be made much easier if C is a linear code.
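For a parity-check code \(C_H\) as in (1), unambiguous decoding amounts to solving a linear system in the erased coordinates: a codeword \(\textbf{x}\) satisfies \(H_E\textbf{x}_E^T=-H_{E^c}\textbf{x}_{E^c}^T\), and the solution is unique precisely when the submatrix \(H_E\) of columns indexed by E has full column rank. The following Python sketch (illustrative code for \(q=2\), where signs are immaterial) implements this:

```python
import numpy as np

def ud_decode(H, r):
    """Unambiguous decoding of the parity-check code C_H over F_2.

    r is the received list with None at erased positions E.  The erased
    part x_E must solve H_E x_E^T = H_{E^c} x_{E^c}^T (mod 2); the solution
    is unique iff H_E has full column rank, otherwise return None ("failure").
    """
    H = np.array(H) % 2
    E = [i for i, ri in enumerate(r) if ri is None]
    known = [i for i, ri in enumerate(r) if ri is not None]
    s = H[:, known].dot([r[i] for i in known]) % 2      # syndrome of known part
    M = np.column_stack([H[:, E], s]).astype(np.uint8)  # augmented system
    rank = 0
    for c in range(len(E)):
        piv = next((x for x in range(rank, M.shape[0]) if M[x, c]), None)
        if piv is None:
            return None        # H_E not of full column rank: ambiguous
        M[[rank, piv]] = M[[piv, rank]]
        for x in range(M.shape[0]):
            if x != rank and M[x, c]:
                M[x] ^= M[rank]
        rank += 1
    out = list(r)
    for row, i in enumerate(E):
        out[i] = int(M[row, -1])   # pivot of column `row` sits in row `row`
    return out

H = [[1, 1, 0, 1], [0, 1, 1, 1]]          # m = 2, n = 4
print(ud_decode(H, [1, None, None, 0]))   # recovers [1, 1, 1, 0]
```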
Now assume that C is an \([n,k]_q\) linear code, that is, C is a k-dimensional subspace of \(\mathbb {F}_q^n\). For any \(E \subset [n]\), define
$$\begin{aligned} C(E):=\left\{ \textbf{c} \in C: c_i=0 \text{ for } \text{ all } i \in [n]{\setminus } E\right\} . \end{aligned}$$
It is easy to see that if the set \(C(E,\textbf{r})\) is not empty, then the cardinality of \(C(E,\textbf{r})\) is the same as that of C(E), which is a vector space over \(\mathbb {F}_q\). Denote by \(\{I_i^{(\ell )}(C)\}_{i=1}^{n}\) the \(q^{\ell }\)-incorrigible set distribution of C, and by \(\{I_i(C)\}_{i=1}^{n}\) the incorrigible set distribution of C, which are defined respectively as follows:
$$\begin{aligned} I_i^{(\ell )}(C):=\#\left\{ E \subseteq [n]: \#E=i,\ \dim C(E) \ge \ell +1\right\} , \qquad I_i(C):=I_i^{(0)}(C). \end{aligned}$$
Here \(\dim C(E)\) denotes the dimension of C(E) as a vector space over \(\mathbb {F}_q\). We see that \(\dim C(E) \le \min \{k, \# E\}\), so if \(\ell \ge \min \{k, \# E\}\), then \(I_i^{(\ell )}(C)=0\). We also define
$$\begin{aligned} \lambda _i^{(\ell )}(C):=\#\left\{ E \subseteq [n]: \#E=i,\ \dim C(E)=\ell \right\} . \end{aligned}$$
(13)
Clearly
$$\begin{aligned} \sum _{\ell =0}^{i}\lambda _i^{(\ell )}(C)=\left( {\begin{array}{c}n\\ i\end{array}}\right) . \end{aligned}$$
(14)
Recall from [14] that the values \(P_{\textrm{ud}}(C,\varepsilon ), P_{\textrm{mld}}(C,\varepsilon )\) and \(P_{\textrm{ld}}(C,\ell ,\varepsilon )\) can all be expressed in terms of \(I_i^{(\ell )}(C)\), \(I_i(C)\) and \(\lambda _i^{(\ell )}(C)\) as follows:
$$\begin{aligned} P_{\textrm{ud}}(C,\varepsilon )=\sum _{i=1}^{n} I_i(C)\,\varepsilon ^i(1-\varepsilon )^{n-i}, \quad P_{\textrm{ld}}(C,\ell ,\varepsilon )=\sum _{i=1}^{n} I_i^{(\ell )}(C)\,\varepsilon ^i(1-\varepsilon )^{n-i}, \end{aligned}$$
$$\begin{aligned} P_{\textrm{mld}}(C,\varepsilon )=\sum _{i=1}^{n}\sum _{\ell =1}^{i}\left( 1-q^{-\ell }\right) \lambda _i^{(\ell )}(C)\,\varepsilon ^i(1-\varepsilon )^{n-i}. \end{aligned}$$
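For small parameters these expressions can be checked by direct enumeration. The following Python sketch (illustrative code for \(q=2\)) computes \(P_{\textrm{ud}}(C_H,\varepsilon )\) from the definition, using the relation \(\dim C_H(E)=\#E-\textrm{rk}(H_E)\) (which is used again in Sect. 4):

```python
from itertools import combinations

def gf2_rank(cols):
    """Rank over F_2 of a set of columns, each an integer bitmask."""
    basis, r = [], 0
    for v in cols:
        for b in basis:
            v = min(v, v ^ b)      # xor-basis reduction
        if v:
            basis.append(v)
            r += 1
    return r

def p_ud_exact(cols, n, eps):
    """P_ud(C_H, eps) = sum_i I_i(C_H) eps^i (1-eps)^(n-i); the erasure set E
    is incorrigible iff dim C_H(E) = #E - rk(H_E) >= 1, i.e. rk(H_E) < #E."""
    total = 0.0
    for i in range(1, n + 1):
        I_i = sum(1 for E in combinations(range(n), i)
                  if gf2_rank([cols[j] for j in E]) < i)
        total += I_i * eps ** i * (1 - eps) ** (n - i)
    return total

# columns of a 3 x 6 parity-check matrix H as 3-bit masks (q = 2)
cols = [0b001, 0b010, 0b100, 0b011, 0b101, 0b110]
print(p_ud_exact(cols, 6, 0.2))
```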
For \(H \in \mathcal {R}_{m,n}\), we write \(P_\textrm{ld}(H,\ell ,\varepsilon ):=P_\textrm{ld}(C_H,\ell ,\varepsilon )\) and \(P_*(H,\varepsilon ):=P_*(C_H,\varepsilon )\) for \(* \in \{\textrm{ud}, \textrm{mld}\}\), where \(C_H\) is the parity-check code defined by (1). The average decoding error probabilities over the ensemble \(\mathcal {R}_{m,n}\) are given by
$$\begin{aligned} P_{\textrm{ld}}(\mathcal {R}_{m,n},\ell ,\varepsilon ):=\mathbb {E}\left[ P_{\textrm{ld}}(H,\ell ,\varepsilon )\right] , \qquad P_{*}(\mathcal {R}_{m,n},\varepsilon ):=\mathbb {E}\left[ P_{*}(H,\varepsilon )\right] , \quad * \in \{\textrm{ud}, \textrm{mld}\}. \end{aligned}$$
Here the expectation \(\mathbb {E}\) is taken over the ensemble \(\mathcal {R}_{m,n}\).
Let \(0<R, \varepsilon <1\) be fixed constants. The error exponent \(T_{\textrm{ud}}(\varepsilon )\) for the average unsuccessful decoding probability of the family of ensembles \(\{\mathcal {R}_{(1-R)n,n}\}\) under unambiguous decoding is defined as
$$\begin{aligned} T_{\textrm{ud}}(\varepsilon ):=\lim _{n \rightarrow \infty }-\frac{1}{n}\log _q P_{\textrm{ud}}(\mathcal {R}_{(1-R)n,n},\varepsilon ), \end{aligned}$$
provided that the limit exists [8, 14, 15]. The error exponents of \(\{\mathcal {R}_{(1-R)n,n}\}\) for the other two decoding principles are defined similarly.
2.2 Gaussian binomial coefficients
For integers \(n \ge k \ge 0\), the Gaussian binomial coefficient is defined as
$$\begin{aligned} \left[ {\begin{array}{c}n\\ k\end{array}}\right] _q:=\frac{(q)_n}{(q)_k\,(q)_{n-k}}, \end{aligned}$$
where \((q)_n:=\prod _{i=1}^n \left( 1-q^i\right) \). By convention \((q)_0=1\); moreover \(\left[ {\begin{array}{c}n\\ 0\end{array}}\right] _q=\left[ {\begin{array}{c}n\\ n\end{array}}\right] _q=1\) for any \(n \ge 0\), and \(\left[ {\begin{array}{c}n\\ k\end{array}}\right] _q=0\) if \(k<0\) or \(k>n\). The function \(\psi _m(i)\) defined in (2) can be written as
$$\begin{aligned} \psi _m(i)=\frac{(q^{-1})_m}{(q^{-1})_{m-i}}. \end{aligned}$$
We may define \(\psi _m(0)=1\) for \(m \ge 0\) and \(\psi _m(i)=0\) if \(i<0\) or \(i>m\). Next, recall the well-known combinatorial interpretation of \(\left[ {\begin{array}{c}n\\ k\end{array}}\right] _q\):
Lemma 1
([13]) The number of k-dimensional subspaces of an n-dimensional vector space over \(\mathbb {F}_q\) is \(\left[ {\begin{array}{c}n\\ k\end{array}}\right] _q\).
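For example, for \(n=2\) and \(k=1\) this gives \(\left[ {\begin{array}{c}2\\ 1\end{array}}\right] _q=\frac{q^2-1}{q-1}=q+1\), matching the \(q+1\) lines through the origin in \(\mathbb {F}_q^2\).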
The Gaussian binomial coefficient satisfies the property
and the identity
(16)
3 Three counting results for the ensemble \(\mathcal {R}_{m,n}\)
In this section we provide three counting results about matrices of certain rank in the ensemble \(\mathcal {R}_{m,n}\). Such results may not be new, but since it is not easy to locate them in the literature, we prove them here. These results will be used repeatedly in the proofs later on.
For \(H \in \mathcal {R}_{m,n}\), denote by \(\textrm{rk}(H)\) the rank of the matrix H over \(\mathbb {F}_q\).
Lemma 2
Let H be a random matrix in the ensemble \(\mathcal {R}_{m,n}\). Then for any integer j, we have
$$\begin{aligned} \mathbb {P}(\textrm{rk}(H)=j)=\left[ {\begin{array}{c}n\\ j\end{array}}\right] _q\,\psi _m(j)\,q^{m(j-n)}. \end{aligned}$$
(17)
Proof
We may assume that j satisfies \(0 \le j \le n\), because if j is not in the range, then both sides of Equation (17) are obviously zero.
Denote by \(\textrm{Hom}(m,n)\) the set of \(\mathbb {F}_q\)-linear transformations from \(\mathbb {F}_q^m\) to \(\mathbb {F}_q^n\). Writing vectors in \(\mathbb {F}_q^m\) and \(\mathbb {F}_q^n\) as row vectors, we see that the random matrix ensemble \(\mathcal {R}_{m,n}\) can be identified with the set \(\textrm{Hom}(m,n)\) via the relation
$$\begin{aligned} H \,\,\, \leftrightarrow \,\,\, G : \begin{array}{c}\mathbb {F}_q^m \rightarrow \mathbb {F}_q^n\\ \textbf{x} \mapsto \textbf{x}H \end{array}. \end{aligned}$$
(18)
Since \(\textrm{rk}(H)=j\) if and only if \(\dim (\textrm{Im}G)=j\), and \(\#\mathcal {R}_{m,n}=q^{mn}\), we have
$$\begin{aligned} \mathbb {P}(\textrm{rk}(H)=j)=\frac{1}{q^{mn}}\sum _{\begin{array}{c} V \le \mathbb {F}_q^n \\ \dim V=j \end{array}}\;\sum _{\begin{array}{c} G \in \textrm{Hom}(m,n) \\ \textrm{Im}G=V \end{array}} 1. \end{aligned}$$
(19)
The inner sum \(\sum _{G \in \textrm{Hom}(m,n),\textrm{Im}G=V} 1\) counts the number of surjective linear transformations from \(\mathbb {F}_q^m\) to V, a j-dimensional subspace of \(\mathbb {F}_q^n\). Since \(V \cong \mathbb {F}_q^j\), this is also the number of surjective linear transformations from \(\mathbb {F}_q^m\) to \(\mathbb {F}_q^j\), or, equivalently, the number of \(m \times j\) matrices K over \(\mathbb {F}_q\) such that the columns of K are linearly independent. The number of such matrices K can be counted as follows: the first column of K can be any nonzero vector over \(\mathbb {F}_q\), so there are \(q^m-1\) choices; given the first column, the second column can be any vector lying outside the space of scalar multiples of the first column, so there are \(q^m-q\) choices; inductively, given the first k columns, the \((k+1)\)-th column lies outside a k-dimensional subspace, so the number of choices for the \((k+1)\)-th column is \(q^m-q^k\). Thus, counting the subspaces V by Lemma 1, we have
$$\begin{aligned} \mathbb {P}(\textrm{rk}(H)=j)=\frac{1}{q^{mn}}\left[ {\begin{array}{c}n\\ j\end{array}}\right] _q\prod _{k=0}^{j-1}\left( q^m-q^k\right) =\left[ {\begin{array}{c}n\\ j\end{array}}\right] _q\,\psi _m(j)\,q^{m(j-n)}, \end{aligned}$$
which is (17). This completes the proof of Lemma 2. \(\square \)
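For small parameters, Lemma 2 can be confirmed by exhaustive enumeration. The following Python sketch (illustrative code for \(q=2\)) tabulates the rank distribution of all \(2^{mn}\) binary \(m \times n\) matrices and compares it with (17):

```python
from itertools import product

def gf2_rank(rows):
    """Rank over F_2; each row of the matrix is an integer bitmask."""
    basis, r = [], 0
    for v in rows:
        for b in basis:
            v = min(v, v ^ b)
        if v:
            basis.append(v)
            r += 1
    return r

def gauss_binom(n, k, q=2):
    if k < 0 or k > n:
        return 0
    num = den = 1
    for j in range(k):
        num *= q ** (n - j) - 1
        den *= q ** (k - j) - 1
    return num // den

def psi(m, i, q=2):
    p = 1.0
    for k in range(i):
        p *= 1.0 - float(q) ** (k - m)
    return p

m, n, q = 3, 4, 2
counts = {}
for rows in product(range(2 ** n), repeat=m):   # all 2^(mn) matrices
    j = gf2_rank(rows)
    counts[j] = counts.get(j, 0) + 1

for j in sorted(counts):
    empirical = counts[j] / 2 ** (m * n)
    predicted = gauss_binom(n, j, q) * psi(m, j, q) * float(q) ** (m * (j - n))
    print(j, empirical, predicted)    # the two columns agree
```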
Lemma 3
Let H be a random matrix in the ensemble \(\mathcal {R}_{m,n}\). Let \(A \subset [n]:=\{1,2,\ldots ,n\}\) be a subset with cardinality s. Denote by \(H_A\) the \(m \times s\) submatrix formed by the columns of H indexed by A. Then for any integers j and r, we have
$$\begin{aligned} \mathbb {P}(\textrm{rk}(H)=j \cap \textrm{rk}(H_A)=r)=\left[ {\begin{array}{c}s\\ r\end{array}}\right] _q\left[ {\begin{array}{c}n-s\\ j-r\end{array}}\right] _q\,\psi _m(j)\,q^{r(n-s-j+r)+m(j-n)}. \end{aligned}$$
(20)
Proof
We may assume that \(0 \le j \le n\) and \(\max \{0,s-n+j\} \le r \le \min \{j,s\}\), because if j or r does not satisfy this condition, then both sides of Equation (20) are zero.
Using the relations (18) and (19), we can expand the term \(\mathbb {P}(\textrm{rk}(H)=j \cap \textrm{rk}(H_A)=r)\) as
$$\begin{aligned} \mathbb {P}(\textrm{rk}(H)=j \cap \textrm{rk}(H_A)=r)=\frac{q^{mj}\,\psi _m(j)}{q^{mn}}\sum _{\begin{array}{c} V \le \mathbb {F}_q^n:\ \dim V=j \\ \dim V_A=r \end{array}} 1. \end{aligned}$$
(21)
Here \(V_A\) is the subspace of \(\mathbb {F}_q^A\) formed by restricting V to coordinates with indices from A. We may consider the projection given by
$$\begin{aligned} \pi _A: V \rightarrow V_A, \quad \textbf{v}=(v_i)_{i \in [n]} \mapsto (v_i)_{i \in A}. \end{aligned}$$
The kernel of \(\pi _A\) has dimension \(j-r\) and is of the form \(W \times \{(0)_A\}\) for some subspace \(W \le \mathbb {F}_q^{[n]-A}\). So we can further decompose the sum on the right hand side of (21) as
$$\begin{aligned} \sum _{\begin{array}{c} V:\ \dim V=j \\ \dim V_A=r \end{array}} 1=\sum _{\begin{array}{c} W \le \mathbb {F}_q^{[n]-A} \\ \dim W=j-r \end{array}}\;\sum _{\begin{array}{c} V:\ \dim V=j \\ \ker (\pi _A)=W \times \{(0)_A\} \end{array}} 1. \end{aligned}$$
(22)
Now we compute the inner sum on the right hand side of (22). Suppose we are given an ordered basis of the \((j-r)\)-dimensional subspace \(W \times \{(0)_A\}\) of \(\mathbb {F}_q^n\). We extend it to an ordered basis of some j-dimensional subspace V as follows: first we need r other basis vectors \(\textbf{v}_1,\textbf{v}_2,\ldots ,\textbf{v}_r\) to be linearly independent. At the same time, they have to be linearly independent of any nonzero vector in \(\mathbb {F}_q^{[n]-A} \times \{(0)_A\}\) due to the kernel condition. This requires the set \(\{\pi _A(\textbf{v}_1), \pi _A(\textbf{v}_2), \ldots , \pi _A(\textbf{v}_r)\}\) to be linearly independent in \(\mathbb {F}_q^A\). On the other hand, if this condition is satisfied, then the r vectors \(\textbf{v}_1,\textbf{v}_2,\ldots ,\textbf{v}_r\) are also linearly independent of one another as well as of any nonzero vector in \(W \times \{(0)_A\}\). Therefore it reduces to counting the number of ordered linearly independent sets of r vectors in \(\mathbb {F}_q^A\). This number is clearly given by \(\prod _{k=0}^{r-1}(q^s-q^k)\), so the total number of different ordered bases is given by \(q^{r(n-s)}\prod _{k=0}^{r-1}(q^s-q^k)\).
On the other hand, given a fixed j-dimensional subspace V with \(\ker (\pi _A)=W \times \{(0)_A\}\), we count the number of ordered bases of V of the form stated in the previous paragraph as follows: we choose \(\textbf{v}_1\) to be any vector in V but not in \(W \times \{(0)_A\}\), which gives \(q^j-q^{j-r}\) choices for \(\textbf{v}_1\); similarly, \(\textbf{v}_2\) is any vector in V but not in the span of \(W \times \{(0)_A\}\) and \(\textbf{v}_1\), which gives \(q^j-q^{j-r+1}\) choices for \(\textbf{v}_2\); continuing this argument, we see that the number of such ordered bases is given by \(\prod _{k=0}^{r-1} (q^j-q^{j-r+k})\).
We conclude from the above arguments that
$$\begin{aligned} \sum _{\begin{array}{c} V:\ \dim V=j \\ \ker (\pi _A)=W \times \{(0)_A\} \end{array}} 1=\frac{q^{r(n-s)}\prod _{k=0}^{r-1}\left( q^s-q^k\right) }{\prod _{k=0}^{r-1}\left( q^j-q^{j-r+k}\right) }. \end{aligned}$$
Putting this into the right hand side of (22) and using Lemma 1 again, we get
$$\begin{aligned} \sum _{\begin{array}{c} V:\ \dim V=j \\ \dim V_A=r \end{array}} 1=\left[ {\begin{array}{c}n-s\\ j-r\end{array}}\right] _q\frac{q^{r(n-s)}\prod _{k=0}^{r-1}\left( q^s-q^k\right) }{\prod _{k=0}^{r-1}\left( q^j-q^{j-r+k}\right) }=\left[ {\begin{array}{c}s\\ r\end{array}}\right] _q\left[ {\begin{array}{c}n-s\\ j-r\end{array}}\right] _q\,q^{r(n-s-j+r)}. \end{aligned}$$
The desired result is obtained immediately by plugging this into the right hand side of (21). This completes the proof of Lemma 3. \(\square \)
Lemma 4
Let H be a random matrix in the ensemble \(\mathcal {R}_{m,n}\). Let \(E,E'\) be subsets of [n] such that
It is easy to see that the two events \(\textrm{rk}(H_E)=i\) and \(\textrm{rk}(H_{E'})=i'\) are conditionally independent given \(\textrm{rk}(H_{E \cap E'})=s\), since columns of \(H_E\) and \(H_{E'}\) are independent as random vectors over \(\mathbb {F}_q\). Hence we get
4 Proof of Theorem 1
Let C be an \([n,k]_q\) linear code. Recall from Sect. 2 that the values \(P_{\textrm{ud}}(C,\varepsilon ), P_{\textrm{mld}}(C,\varepsilon )\) and \(P_{\textrm{ld}}(C,\ell ,\varepsilon )\) can all be expressed explicitly as
$$\begin{aligned} P_{\textrm{ud}}(C,\varepsilon )=\sum _{i=1}^{n} I_i(C)\,\varepsilon ^i(1-\varepsilon )^{n-i}, \end{aligned}$$
(27)
$$\begin{aligned} P_{\textrm{ld}}(C,\ell ,\varepsilon )=\sum _{i=1}^{n} I_i^{(\ell )}(C)\,\varepsilon ^i(1-\varepsilon )^{n-i}, \end{aligned}$$
(28)
$$\begin{aligned} P_{\textrm{mld}}(C,\varepsilon )=\sum _{i=1}^{n}\sum _{\ell =1}^{i}\left( 1-q^{-\ell }\right) \lambda _i^{(\ell )}(C)\,\varepsilon ^i(1-\varepsilon )^{n-i}, \end{aligned}$$
(29)
where \(I_i(C)\) and \(I_i^{(\ell )}(C)\) are the incorrigible set distribution and the \(q^{\ell }\)-incorrigible set distribution of C respectively, and \(\lambda _i^{(\ell )}(C)\) is defined in (13), also in Sect. 2.
Now we can start the proof of Theorem 1. For \(H \in \mathcal {R}_{m,n}\), we denote
$$\begin{aligned} I_i:=I_i(C_H), \quad I_i^{(\ell )}:=I_i^{(\ell )}(C_H), \quad \lambda _i^{(\ell )}:=\lambda _i^{(\ell )}(C_H). \end{aligned}$$
We now compute \(\mathbb {E}[\lambda _i^{(\ell )}]\). Noting that for \(H \in \mathcal {R}_{m,n}\) and \(E \subset [n]\) we have \(\textrm{rk}(H_E)+\dim C_H(E)=\#E\), thus
$$\begin{aligned} \mathbb {E}[\lambda _i^{(\ell )}]=\sum _{\begin{array}{c} E \subset [n] \\ \#E=i \end{array}}\mathbb {P}\left( \textrm{rk}(H_E)=i-\ell \right) . \end{aligned}$$
By the symmetry of the ensemble \(\mathcal {R}_{m,n}\), each summand on the right hand side depends only on the cardinality of E, so we may assume \(E=[i]\) to obtain
$$\begin{aligned} \mathbb {E}[\lambda _i^{(\ell )}]=\left( {\begin{array}{c}n\\ i\end{array}}\right) \mathbb {P}\left( \textrm{rk}(H_{[i]})=i-\ell \right) . \end{aligned}$$
The right hand side is exactly \(\left( {\begin{array}{c}n\\ i\end{array}}\right) \mathbb {P}(\textrm{rk}(H)=i-\ell )\), where the probability is over the ensemble \(\mathcal {R}_{m,i}\). So from Lemma 2 we have
$$\begin{aligned} \mathbb {E}[\lambda _i^{(\ell )}]=\left( {\begin{array}{c}n\\ i\end{array}}\right) \left[ {\begin{array}{c}i\\ \ell \end{array}}\right] _q\psi _m(i-\ell )\,q^{-m\ell }. \end{aligned}$$
(30)
Noting that \(I_i^{(\ell )}=\sum _{r=\ell +1}^{i}\lambda _i^{(r)}\), we also obtain, after substituting \(j=i-r\) and using \(\left[ {\begin{array}{c}i\\ r\end{array}}\right] _q=\left[ {\begin{array}{c}i\\ i-r\end{array}}\right] _q\),
$$\begin{aligned} \mathbb {E}[I_i^{(\ell )}]=\left( {\begin{array}{c}n\\ i\end{array}}\right) \sum _{j=0}^{i-\ell -1}\left[ {\begin{array}{c}i\\ j\end{array}}\right] _q\psi _m(j)\,q^{m(j-i)}. \end{aligned}$$
Inserting the above values \(\mathbb {E}[I_i^{(\ell )}]\) and \(\mathbb {E}[\lambda _i^{(\ell )}]\) into (28) and (29) respectively, we obtain explicit expressions for \(P_{\textrm{ld}}(\mathcal {R}_{m,n},\ell ,\varepsilon )\) and \(P_{\textrm{mld}}(\mathcal {R}_{m,n},\varepsilon )\), which agree with (3) and (5) of Theorem 1.
As for \(P_{\textrm{ud}}(\mathcal {R}_{m,n},\varepsilon )\), noting by (14) that \(\sum _{\ell =0}^i \mathbb {E}[\lambda _i^{(\ell )}]=\left( {\begin{array}{c}n\\ i\end{array}}\right) \) and by (30) that \(\mathbb {E}[\lambda _i^{(0)}]=\psi _m(i)\left( {\begin{array}{c}n\\ i\end{array}}\right) \), we therefore have
$$\begin{aligned} \mathbb {E}[I_i]=\sum _{\ell =1}^{i}\mathbb {E}[\lambda _i^{(\ell )}]=\left( {\begin{array}{c}n\\ i\end{array}}\right) \left( 1-\psi _m(i)\right) . \end{aligned}$$
Inserting this value into (27), we also obtain the desired expression of \(P_{\textrm{ud}}(\mathcal {R}_{m,n},\varepsilon )\). This completes the proof of Theorem 1.
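As a sanity check, formula (4) can be compared against a direct Monte Carlo estimate over the ensemble; the following Python sketch (illustrative code for \(q=2\) and small parameters) samples matrices H and computes \(P_{\textrm{ud}}(H,\varepsilon )\) exactly for each sample:

```python
import random
from itertools import combinations
from math import comb

def gf2_rank(cols):
    """Rank over F_2 of a list of columns, each an integer bitmask."""
    basis, r = [], 0
    for v in cols:
        for b in basis:
            v = min(v, v ^ b)
        if v:
            basis.append(v)
            r += 1
    return r

def p_ud_exact(cols, n, eps):
    """P_ud(C_H, eps) by enumerating all erasure sets E: decoding is
    ambiguous iff dim C_H(E) = #E - rk(H_E) >= 1, i.e. rk(H_E) < #E."""
    return sum(eps ** i * (1 - eps) ** (n - i)
               for i in range(1, n + 1)
               for E in combinations(range(n), i)
               if gf2_rank([cols[j] for j in E]) < i)

def psi(m, i, q=2):
    p = 1.0
    for k in range(i):
        p *= 1.0 - float(q) ** (k - m)
    return p

random.seed(0)
m, n, eps, trials = 3, 6, 0.2, 2000
mc = sum(p_ud_exact([random.randrange(2 ** m) for _ in range(n)], n, eps)
         for _ in range(trials)) / trials
exact = sum(comb(n, i) * (1 - psi(m, i)) * eps ** i * (1 - eps) ** (n - i)
            for i in range(1, n + 1))
print(mc, exact)   # the Monte Carlo average should be close to formula (4)
```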
5 Proof of Theorem 2
First recall that the error exponents of the average decoding error probability of the ensemble \(\mathcal {R}_{(1-R)n,n}\) over the erasure channel under the three decoding principles are defined by
$$\begin{aligned} T_{\textrm{ld}}(\ell ,\varepsilon ):=\lim _{n \rightarrow \infty }-\frac{1}{n}\log _q P_{\textrm{ld}}(\mathcal {R}_{(1-R)n,n},\ell ,\varepsilon ), \end{aligned}$$
(32)
and similarly for \(T_{\textrm{ud}}(\varepsilon )\) and \(T_{\textrm{mld}}(\varepsilon )\),
provided that the limits exist; note that \(T_{\textrm{ud}}(\varepsilon )\) corresponds to the case \(\ell =0\) of (32). So we only need to prove Part 1) of Theorem 2, i.e., the case of list decoding.
Write \(m=(1-R)n\), and we define
$$\begin{aligned} f_{i,j}:=\left( {\begin{array}{c}n\\ i\end{array}}\right) \left[ {\begin{array}{c}i\\ j\end{array}}\right] _q\psi _m(i-j)\,q^{-mj}\,\varepsilon ^i(1-\varepsilon )^{n-i}, \end{aligned}$$
so that by Theorem 1, \(P_{\textrm{ld}}(\mathcal {R}_{m,n},\ell ,\varepsilon )=\sum _{i=1}^{n}\sum _{j=\ell +1}^{i} f_{i,j}\).
It is easy to see that \(f_{i,j} \ge 0\) for all integers i, j. In addition, \(f_{i,j} \ne 0\) if and only if \(1 \le i \le n, 1 \le j \le i\) and \(i-j \le m\).
Now we focus on the quantity \(f_{i,j}\). First, set \(T=q^{-1}\) so that \(0< T < 1\). It is easy to verify that for \(0 \le j \le i\),
$$\begin{aligned} \left[ {\begin{array}{c}i\\ j\end{array}}\right] _q=q^{j(i-j)}\frac{(T)_i}{(T)_j\,(T)_{i-j}}, \qquad \psi _m(i-j)=\frac{(T)_m}{(T)_{m-i+j}}. \end{aligned}$$
(35)
Hence we have
$$\begin{aligned} f_{i,j}=\left( {\begin{array}{c}n\\ i\end{array}}\right) \varepsilon ^i(1-\varepsilon )^{n-i}\,q^{-j(m-i+j)}\,\frac{(T)_i\,(T)_m}{(T)_j\,(T)_{i-j}\,(T)_{m-i+j}}. \end{aligned}$$
The infinite product \((T)_\infty :=\prod _{r=1}^\infty (1-T^r)\) converges absolutely to some positive real number M which only depends on q, and \(M=(T)_\infty < (T)_m \le 1\) for any m. This implies that
$$\begin{aligned} \frac{1}{n}\log _q f_{i,j}=\frac{1}{n}\log _q\left( {\begin{array}{c}n\\ i\end{array}}\right) +\frac{i}{n}\log _q\varepsilon +\frac{n-i}{n}\log _q(1-\varepsilon )-\frac{j(m-i+j)}{n}+O\left( \frac{1}{n}\right) , \end{aligned}$$
where the \(O(1/n)\) term is bounded uniformly in i and j.
We want to maximize this quantity over i, j satisfying (33). Since the term \(-\frac{j(m-i+j)}{n}\) is always non-positive, it is easy to see that for any fixed i, to maximize the term \(\frac{1}{n}\log _q f_{i,j}\), we shall take
$$\begin{aligned}j=\left\{ \begin{array}{lcl} \ell +1 & :& \text{ if } \ell +1 > i-m, \text{ or } \text{ equivalently } \text{ if } i < m+\ell +1, \\ i-m & :& \text{ otherwise }. \end{array}\right. \end{aligned}$$
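Although the remaining optimization is carried out analytically, the maximization can also be explored numerically. The following Python sketch (illustrative code; a finite-n approximation with an \(O(\log n/n)\) error) maximizes \(\frac{1}{n}\log _q f_{i,j}\) with the optimal choice of j described above:

```python
from math import lgamma, log

def exponent_ld(R, eps, ell, q=2, n=5000):
    """Approximates T_ld(ell, eps) as -max_{i,j} (1/n) log_q f_{i,j},
    with j chosen by the two-case rule above; exact only as n -> infinity."""
    m = round((1 - R) * n)
    lq = log(q)
    best = float("-inf")
    for i in range(1, n + 1):
        j = ell + 1 if i < m + ell + 1 else i - m
        if j < 1 or j > i or i - j > m:
            continue          # f_{i,j} = 0 outside this range
        log_binom = (lgamma(n + 1) - lgamma(i + 1) - lgamma(n - i + 1)) / lq
        val = (log_binom + i * log(eps) / lq + (n - i) * log(1 - eps) / lq
               - j * (m - i + j)) / n
        best = max(best, val)
    return -best

print(exponent_ld(R=0.3, eps=0.2, ell=0))   # approximate T_ud(0.2) at rate 0.3
```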
6 Proof of Theorem 3
The proof of Theorem 3 depends on the computation of the variance of the unsuccessful decoding probability under unambiguous decoding and its error exponent.
6.1 The variance of unsuccessful decoding probability and its error exponent
Note from (24) that the variance \(\sigma _{\textrm{ud}}^2(\mathcal {R}_{m,n},\varepsilon )\) of the unsuccessful decoding probability under unambiguous decoding can be expressed as
Here the multinomial coefficient \(\left( {\begin{array}{c}n\\ a,b,c,d\end{array}}\right) \), for non-negative integers a, b, c, d, n such that \(a+b+c+d=n\), is given by
$$\begin{aligned} \left( {\begin{array}{c}n\\ a,b,c,d\end{array}}\right) =\frac{n!}{a!\,b!\,c!\,d!}. \end{aligned}$$
Obviously if i or \(i'\) does not satisfy the relation \(1 \le i, i' \le m\), then both sides of (38) are zero. So we may assume that \(1 \le i, i' \le m\).
For any \(H\in \mathcal {R}_{m,n}\), from (12) we see that
$$\begin{aligned} P_{\textrm{ud}}(H,\varepsilon )=\sum _{i=1}^{n} I_i(C_H)\,\varepsilon ^i(1-\varepsilon )^{n-i}. \end{aligned}$$
We remark that the proof of Theorem 5 follows an argument similar to that of Theorem 2, though here the computation of \(S_{\textrm{ud}}(\varepsilon )\) is much more complex, as it involves many more technical manipulations. In order to streamline the ideas of this section, we first assume Theorem 5 and defer its proof to the Appendix (Sect. 7). Theorem 3 can then be proved easily by using the standard Chebyshev inequality.
that is, \(\frac{P_\textrm{ud}(H_n,\varepsilon )}{P_{\textrm{ud}}(\mathcal {R}_{(1-R)n,n},\varepsilon )} \rightarrow 1\) with high probability (WHP) as \(n \rightarrow \infty \).
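To sketch the Chebyshev step explicitly (with \(S_{\textrm{ud}}(\varepsilon )\) denoting the error exponent of the variance from Theorem 5): for any fixed \(\delta >0\),
$$\begin{aligned} \mathbb {P}\left( \left| \frac{P_\textrm{ud}(H_n,\varepsilon )}{P_{\textrm{ud}}(\mathcal {R}_{(1-R)n,n},\varepsilon )}-1\right| >\delta \right) \le \frac{\sigma _{\textrm{ud}}^2(\mathcal {R}_{(1-R)n,n},\varepsilon )}{\delta ^2\, P_{\textrm{ud}}(\mathcal {R}_{(1-R)n,n},\varepsilon )^2}=\frac{q^{-n(S_{\textrm{ud}}(\varepsilon )+o(1))}}{\delta ^2\, q^{-2n(T_{\textrm{ud}}(\varepsilon )+o(1))}}, \end{aligned}$$
which tends to 0 as \(n \rightarrow \infty \) whenever \(S_{\textrm{ud}}(\varepsilon )>2T_{\textrm{ud}}(\varepsilon )\).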
To prove Theorem 3, it remains to verify that (44) holds true under the assumptions of either (1) or (2) of Theorem 3.
Case 1. \(\frac{1-\varepsilon }{1+(q-1)\varepsilon ^2} \le R < 1-\varepsilon \)
Since \(\frac{1-\varepsilon }{1+(q-1)\varepsilon }<\frac{1-\varepsilon }{1+(q-1)\varepsilon ^2}\), by a simple calculation, we have
Case 2. \(\frac{1-\varepsilon }{1+(q-1)\varepsilon } \le R < \frac{1-\varepsilon }{1+(q-1)\varepsilon ^2}\)
We can actually obtain a slightly more general result than (2) of Theorem 3 as follows:
Lemma 6
If (44) holds for \(R=R_0\) where \(R_0\) satisfies \(0<R_0 \le \frac{1-\varepsilon }{1+(q-1)\varepsilon ^2}\), then (44) also holds true for any \(R \in \left[ R_0,\frac{1-\varepsilon }{1+(q-1)\varepsilon ^2}\right] \).
Proof of Lemma 6
First, from Case 1 and the continuity of \(S_\textrm{ud}(\varepsilon )\) and \(T_\textrm{ud}(\varepsilon )\), we know that (44) holds for \(R=\frac{1-\varepsilon }{1+(q-1)\varepsilon ^2}\). Hence such \(R_0\) always exists.
Now if \(R \in \left[ \frac{1-\varepsilon }{1+(q-1)\varepsilon },\frac{1-\varepsilon }{1+(q-1)\varepsilon ^2}\right] \), then we have
which is negative for \(R \in \left[ \frac{1-\varepsilon }{1+(q-1)\varepsilon },\frac{1-\varepsilon }{1+(q-1)\varepsilon ^2}\right] \), and thus \(c(\varepsilon ,R)\) is convex \(\cap \) for R within that interval.
Next, for \(0< R < \frac{1-\varepsilon }{1+(q-1)\varepsilon }\), we note that
is a linear function in R with positive slope. Hence the function \(c(\varepsilon ,R)\) is convex \(\cap \) for R within the whole interval \(\left[ R_0,\frac{1-\varepsilon }{1+(q-1)\varepsilon ^2}\right] \), and Lemma 6 follows. \(\square \)
Now we let \(q \in \{2,3,4\}\). Consider \(R=\frac{1-\varepsilon }{1+(q-1)\varepsilon }\). Under this special value,
Therefore exactly one of the two roots of \(Q'(\varepsilon )\) lies in the desired range \(0< \varepsilon < 1\), and \(Q(\varepsilon )\) attains a local maximum at that point. Since \(Q(0)=Q(1)=0\), we conclude that \(Q(\varepsilon )\) is positive for \(0< \varepsilon < 1\), and therefore (44) holds for \(R=\frac{1-\varepsilon }{1+(q-1)\varepsilon }\).
Now by Lemma 6, we see that Eq. (44) holds for \(R \in \left[ \frac{1-\varepsilon }{1+(q-1)\varepsilon },\frac{1-\varepsilon }{1+(q-1)\varepsilon ^2}\right] \) when \(q=2,3,4\). This completes the proof of Theorem 3.
Acknowledgements
The authors would like to acknowledge the editor and anonymous reviewers for their valuable comments and suggestions to improve our manuscript.
Declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
7 Appendix
We note that \(g_{i,i',s} \ge 0\) for all integers \(i,i',s\), and is nonzero if and only if \(1 \le i,i' \le m\), \(1 \le s \le \min \{i,i'\}\) and \(i'-s \le n-i\). Then we can rewrite the term \(\sigma _{\textrm{ud}}^2(\mathcal {R}_{m,n},\varepsilon )\) (see Theorem 4) as
where \(g(t,t',\kappa )\) is defined as in (45) and the supremum is taken over positive real numbers \(t,t',\kappa \) satisfying (46). This completes the proof of Lemma 7. \(\square \)
Using Lemma 7, now we provide a detailed proof of Theorem 5.
We start from the function \(g(t,t',\kappa )\) given in (45). It is best to first take the supremum over \(t'\) while fixing t and \(\kappa \). Therefore we differentiate g with respect to \(t'\) while keeping t and \(\kappa \) fixed, where the range of \(t'\) should be taken as \(\kappa \le t' \le \min \{\kappa -t+1,1-R\}\):
Differentiating both sides with respect to t while keeping \(\kappa \) fixed, where the range of t is taken as \(\max \{\kappa ,1-\frac{1-R-\kappa }{\varepsilon }\} \le t \le 1-R\), we have
Solving \(\frac{\partial G_1}{\partial t}=0\), we get \(t=\frac{\varepsilon +\kappa }{1+\varepsilon }\), and we see that \(G_1\) attains a local maximum at this point. We note that \(\kappa \le \frac{\varepsilon +\kappa }{1+\varepsilon }\) as \(\kappa < 1\). In order for the range of t to be nonempty, we also need \(1-\frac{1-R-\kappa }{\varepsilon } \le 1-R\), which is true if and only if \(0 < \kappa \le 1-R-R\varepsilon \) (and this range is nonempty if and only if \(0< R < \frac{1}{1+\varepsilon }\)). We then obtain
Differentiating both sides with respect to \(\kappa \) and solving \(\mathcal {G}_1'(\kappa )=0\), we get \(\kappa =\frac{q\varepsilon ^2}{1+(q-1)\varepsilon ^2}\). In addition, \(\mathcal {G}_1\) attains a local maximum at this critical point.
Then we have the following two possible cases:
1.
\(0 < R \le \frac{1-\varepsilon }{1+(q-1)\varepsilon ^2}\)
In this case \(\frac{q\varepsilon ^2}{1+(q-1)\varepsilon ^2} \le 1-R-R\varepsilon \), and so the maximum is
Differentiating both sides with respect to t while keeping \(\kappa \) fixed, where the range of t is taken as \(\kappa \le t \le \min \{1-\frac{1-R-\kappa }{\varepsilon }, 1-R\}\), we get
Solving \(\frac{\partial G_2}{\partial t}=0\), we get \(t=R\varepsilon +\kappa \), and we see that \(G_2\) attains a local maximum at this point. We have already seen that \(\kappa \le R\varepsilon +\kappa \). Hence we need to compare the critical point with \(\min \{1-\frac{1-R-\kappa }{\varepsilon }, 1-R\}\). First, in order for the range of t to be nonempty, we must have \(\kappa \le \min \{1-\frac{1-R-\kappa }{\varepsilon },1-R\}\), which is true if and only if \(1-\frac{R}{1-\varepsilon } \le \kappa \le 1-R\). We can further divide into two subcases:
Case 2a. \(1-\frac{R}{1-\varepsilon } \le \kappa \le 1-R-R\varepsilon \)
In this case we have \(1-\frac{1-R-\kappa }{\varepsilon } \le R\varepsilon +\kappa \le 1-R\). Hence the maximum occurs at \(t=1-\frac{1-R-\kappa }{\varepsilon }\). Note that this value of t is precisely the one in which \(t'=1-R=\varepsilon (1-t)+\kappa \). Therefore it is covered in Case 1 already, and the maximum so obtained cannot be larger than the value calculated in that case. Note that this case can only happen when \(0< R < \frac{1}{1+\varepsilon }\).
Case 2b. \(\max \{0, 1-R-R\varepsilon \} < \kappa \le 1-R\)
In this case we have \(1-R< R\varepsilon +\kappa < 1-\frac{1-R-\kappa }{\varepsilon }\). Hence the maximum occurs at \(t=1-R\).
However under our assumption we require \(\kappa \le 1-R\). It is easy to see that we should then take \(\kappa _-\). This is precisely \(\kappa _0\) given by (43).
inside the logarithm on the right-hand side of (50) is a strictly decreasing function within our range of \(\kappa \). Hence it suffices to check the values of \(N(\kappa )\) at the two endpoints to see whether \(\kappa _0\) lies within our range (in particular \(N(\kappa _0)=1\)). It is clear that
\(0 < R \le \frac{1-\varepsilon }{1+(q-1)\varepsilon ^2}\)
In this case we have \(N(1-R-R\varepsilon ) \le 1\), and so \(\mathcal {G}_2'(\kappa ) \le 0\) within the range of \(\kappa \). This shows that the maximum occurs at \(\kappa =1-R-R\varepsilon \). Note that this also implies \(t=1-R=1-\frac{1-R-\kappa }{\varepsilon }\). Since this value is already covered in Case 2a, the maximum cannot be greater than the one obtained there, and thereby not greater than the one in Case 1 either.
2.
\(\frac{1-\varepsilon }{1+(q-1)\varepsilon ^2}< R < 1\)
In this case we have \(N(1-R-R\varepsilon ) > 1\), so one of the critical points (in fact the smaller one) lies within our range; that number is exactly \(\kappa _0\) defined in (43). Since \(\mathcal {G}_2'(\kappa )\) is decreasing in our range, this implies that \(\mathcal {G}_2\) attains its maximum at \(\kappa _0\). The maximum will then be
after simplification and applying the relation \(\mathcal {G}_2'(\kappa _0)=0\).
Note that in particular when \(\frac{1-\varepsilon }{1+(q-1)\varepsilon ^2}< R < \frac{1}{1+\varepsilon }\), this value is larger than \(\mathcal {G}_2(1-R-R\varepsilon )=-R\varepsilon -(1-R-R\varepsilon )\log _q\left( \frac{1-R-R\varepsilon }{\varepsilon ^2}\right) -R(1+\varepsilon )\log _q\left( \frac{R}{1-\varepsilon }\right) =\mathcal {G}_1(1-R-R\varepsilon )\).
8 Conclusion
In this paper we carried out an in-depth study of the average decoding error probabilities of the random parity-check matrix ensemble \(\mathcal {R}_{m,n}\) over the erasure channel under three decoding principles, namely unambiguous decoding, maximum likelihood decoding and list decoding.
(1)
We obtained explicit formulas for the average decoding error probabilities of the ensemble under these three decoding principles and computed the error exponents. We also compared the results with those for the random \([n,k]_q\) code ensemble studied previously.
(2)
For unambiguous decoding, we computed the variance of the decoding error probability of the ensemble and the error exponent of the variance, from which we derived a strong concentration result: under general conditions, the ratio of the decoding error probability of a random code in the ensemble to the average decoding error probability of the ensemble converges to 1 with high probability as the code length goes to infinity.
It might be interesting to extend the results of (2) to general list decoding and maximum likelihood decoding for the ensemble \(\mathcal {R}_{m,n}\). As it turns out, the variance of the decoding error probability in these two cases can still be computed, but the expressions are much more complicated, and it is difficult to obtain explicit formulas for their error exponents, and hence concentration results from them. We leave this as an open question for future research.
It may also be interesting to carry out such computations for the random \([n,k]_q\) code ensemble.