Published in: EURASIP Journal on Wireless Communications and Networking 1/2019

Open Access 01.12.2019 | Research

An optimized encoding algorithm for systematic polar codes

Authors: Xiumin Wang, Zhihong Zhang, Jun Li, Yu Wang, Haiyan Cao, Zhengquan Li, Liang Shan

Abstract

Many different encoding algorithms for systematic polar codes (SPC) have been introduced since SPC was proposed in 2011. However, the number of exclusive OR (XOR) computing units has not yet been optimized. Based on an iterative property of the generator matrix and the particular lower triangular structure of the matrix, we propose an optimized encoding algorithm (OEA) for SPC that can reduce the number of XOR computing units compared with existing non-recursive algorithms. We also prove that this property of the generator matrix extends to different code lengths and rates of polar codes. Through matrix segmentation and transformation, we obtain a submatrix with all zero elements to save computation resources. The proportion of zero elements in the matrix can reach 58.5% with the OEA for SPC when the code length and code rate are 2048 and 0.5, respectively. Furthermore, the proposed OEA is beneficial to hardware implementation compared with existing recursive algorithms, in which signals are transmitted bidirectionally.
Notes

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Abbreviations
BEC
Binary erasure channels
NSPC
Non-systematic polar codes
OEA
Optimized encoding algorithm
SPC
Systematic polar codes
XOR
Exclusive OR

1 Introduction

Polar codes, proposed by Arikan [1], can theoretically reach the Shannon limit. They have received wide attention in the communication field because of their low complexity and good decoding performance. As early as 2008, Arikan proposed the concept of channel polarization and gave a rigorous mathematical proof [2]. For the selection of the information bits of polar codes, there are several methods, such as Bhattacharyya parameters [3] and density evolution [4]. Hardware implementations of the traditional non-systematic polar code (NSPC) encoder have been introduced [5–7]. However, considerable complexity and computing resources are consumed when the full generator matrix of the polar code is used to design the hardware structure. Yoo et al. [8] proposed a partially parallel encoder structure that can reduce latency; however, its resource consumption increases with the length of the polar code. Compared with NSPC, the SPC proposed by Arikan [9] performs better in bit error rate. Vangala et al. [10] proposed three encoding algorithms and used recursion to find a suitable trade-off between memory cells and computing units. Chen et al. [11] improved the encoding algorithms designed by Vangala et al. [10] to simplify the storage patterns for SPC. In [12], Sarkis et al. introduced SPC and developed a hardware structure for the decoder; the improved hardware structure increased the throughput of the polar decoder. In the hardware implementation of [13], the input and output ports were exchanged to obtain the SPC encoder from the NSPC encoder; however, the latency and resource consumption were twice those of the NSPC hardware implementation. Wang et al. found a method that utilizes the properties of the parity check matrix to reduce the number of computational units [14–16]; this approach can be applied to both SPC and NSPC. By studying the encoding flow graph of SPC, we found that some of the computational units can be omitted. This omission is determined by the encoding structure and the exclusive OR (XOR) operations.
In this paper, we define some variables that are determined by the code length, the code rate, and the selection of the information bits. Two lemmas regarding the transformed generator matrix are proved. Applying the property of the transformed generator matrix, the proposed algorithm reduces the number of XOR computing units according to the iterative property of the generator matrix. This iterative property yields a submatrix with a particular lower triangular structure; this submatrix is part of the original generator matrix and can have all of its elements equal to zero. Besides the zero submatrices obtained from the iteration process, there are still other zero elements in the original matrix. These extra zero elements can also be omitted when computing the XOR logical operations.
The rest of this paper is organized as follows. The comparison between SPC and NSPC is introduced in Section 2. The existing encoding algorithms for SPC are reviewed in Section 3, where our simplified use of XOR operations is also discussed. The distribution rule of the information bits, the definition of the variables, and the lemma proofs are given in Section 4. A case study of the matrix transformation in our optimized encoding algorithm (OEA) is presented in Section 5. Results and analysis are discussed in Section 6. Conclusions are drawn in Section 7.

1.1 Methods/Experimental

The research content of this paper is mainly theoretical derivation and analysis; specific experimental verification will be carried out in future research.

2 Systematic and non-systematic polar codes

Based on the channel polarization theory, the length of a polar code is N = 2^n, where n is a positive integer (n ≥ 1). Let A represent the set of the indices of the information bits. The code rate is R = K/N, where K represents the length of the information bits, i.e., the number of elements in the set A. The selection of the information bits is determined by the Bhattacharyya parameters [17]. Let Ac represent the complement of the set A. The index set Ac = {0, 1, …, N − 1} − A is for the frozen bits, and the length of Ac is N − K. u = (u0, u1, ⋯, uN − 1) represents the message vector, where ui denotes an arbitrary element of the vector u. x = (x0, x1, ⋯, xN − 1) represents the codeword vector, where xi denotes an arbitrary component of the vector x. The generator matrix GN is defined as
$$ {G}_N={F}^{\otimes n} $$
(1)
where ⊗ denotes the Kronecker power operation, n = log2(N), and F denotes the 2 × 2 matrix F = [1, 0; 1, 1]. Applying the property of the Kronecker product, we can construct the generator matrix as
$$ {G}_N=\left[\begin{array}{cc}{F}^{\otimes n-1}& 0\\ {}{F}^{\otimes n-1}& {F}^{\otimes n-1}\end{array}\right]=\left[\begin{array}{cc}{G}_{N/2}& 0\\ {}{G}_{N/2}& {G}_{N/2}\end{array}\right] $$
(2)
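As a concrete illustration (our own sketch, not code from the paper; the function name polar_generator is a hypothetical choice), the recursion of Eq. (2) can be written in a few lines of Python/NumPy:

```python
import numpy as np

def polar_generator(n):
    """Build G_N = F^{(x)n} for N = 2^n via the recursion of Eq. (2),
    G_N = [[G_{N/2}, 0], [G_{N/2}, G_{N/2}]], starting from G_1 = [1]."""
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(F, G)  # entries stay in {0, 1}, so no mod-2 reduction is needed
    return G

# polar_generator(3) reproduces the 8 x 8 generator matrix used later in Section 3 (Eq. (15)).
```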
The codeword vector x for NSPC can be represented, using the generator matrix GN of Eq. (1), as
$$ x={uG}_N={u}_A{G}_A+{u}_{Ac}{G}_{Ac} $$
(3)
where GA is a submatrix of GN constructed from the rows with indices in A, and GAc is a submatrix of GN constructed from the rows with indices in Ac. uA = (ui : i ∈ A), uA ⊆ u, uAc = (uj : j ∈ Ac), uAc ⊆ u, and uAc = u − uA. The symbol ⊕ represents mod-2 addition, i.e., a logical XOR operation in the binary domain.
Arikan [9] first proposed the mathematical formula shown in Eqs. (4) and (5) for SPC:
$$ {x}_A={u}_A{G}_{AA}+{u}_{Ac}{G}_{Ac A} $$
(4)
$$ {x}_{Ac}={u}_A{G}_{AAc}+{u}_{Ac}{G}_{Ac Ac} $$
(5)
where GAA denotes the submatrix of GN consisting of the elements (Gi,j) with i ∈ A and j ∈ A, i.e., GAA = (Gi,j : i ∈ A, j ∈ A). Similarly for the other submatrices GAcA = (Gi,j : i ∈ Ac, j ∈ A), GAAc = (Gi,j : i ∈ A, j ∈ Ac), and GAcAc = (Gi,j : i ∈ Ac, j ∈ Ac). The same notation is used for xA = (xi : i ∈ A) and xAc = (xj : j ∈ Ac). If the matrix GAA is invertible and the inputs to the SPC encoder are uAc and xA, then the output xAc from the SPC encoder is
$$ {x}_{Ac}=\left({x}_A-{u}_{Ac}{G}_{Ac A}\right){G}_{AA}^{-1}{G}_{AA c}+{u}_{Ac}{G}_{Ac A c} $$
(6)
Considering that the decoding results are not affected by the values of the frozen bits [1], we can simplify the encoding procedure by setting all frozen bits to zero, namely uAc = (ui = 0 : i ∈ Ac). Then, xAc in Eq. (6) can be simplified to
$$ {x}_{Ac}={x}_A{G}_{AA}^{-1}{G}_{AA c} $$
(7)
where GAA−1 is a lower triangular matrix with ones on the diagonal, and the submatrix GAAc also contains a lower triangular block. Hence, the matrix product GAA−1GAAc has the same structure as GAAc: it contains a lower triangular submatrix. We will use the property GAA−1 = GAA in the binary domain [9] to prove Lemma 1 in Section 4 and for the matrix transformation in Section 5.
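For illustration, a minimal sketch of the systematic encoding of Eq. (7) is given below (our own code, not the paper's; it reuses the hypothetical polar_generator helper above and the property GAA−1 = GAA stated in this section, and the function name spc_encode is ours):

```python
import numpy as np

def spc_encode(x_A, A, G):
    """Systematic encoding via Eq. (7): x_Ac = x_A G_AA^{-1} G_AAc over GF(2),
    with all frozen bits set to zero and G_AA^{-1} = G_AA (binary domain)."""
    N = G.shape[0]
    A = sorted(A)
    Ac = sorted(set(range(N)) - set(A))
    G_AA  = G[np.ix_(A, A)].astype(int)    # (G_{i,j} : i in A, j in A)
    G_AAc = G[np.ix_(A, Ac)].astype(int)   # (G_{i,j} : i in A, j in Ac)
    T = (G_AA @ G_AAc) % 2                 # G_AA^{-1} G_AAc, G_AA being its own inverse
    x_Ac = (np.asarray(x_A, dtype=int) @ T) % 2
    x = np.zeros(N, dtype=int)
    x[A], x[Ac] = x_A, x_Ac                # information bits appear unchanged in x
    return x
```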

3 Proposed OEA for SPC

The development of the proposed OEA was motivated by [9, 11]. The computational complexity can be decreased by reducing the number of logical XOR computing units. The following example illustrates the procedure of omitting XOR computing units. The encoding outputs are the same whether they are computed from the generator matrix or from the encoding diagram.
Figure 1 shows the SPC encoding diagram with N = 8 and A = {1,3,5,6,7}. In Fig. 1, the encoding direction is from bottom to top. The encoding process for the information bits runs from right to left, and the encoding process for the frozen bits then runs from left to right. The gray circles on the rightmost side represent the information bits, and the black circles on the leftmost side represent the frozen bits. The black arrow on the right indicates the computing direction of the information bits, and the black arrow on the left indicates the computing direction of the frozen bits. Since the values of the frozen bits do not affect the encoding result, we set them to zero. The final outputs of the encoding are the values of the rightmost black circles.
For the SPC encoder in Fig. 1, we can reduce the XOR operations needed to obtain the outputs x0, x2, and x4. From Fig. 1, we obtain
$$ {x}_0={u}_0\oplus {u}_4\oplus {u}_2\oplus {u}_6\oplus {u}_1\oplus {u}_5\oplus {u}_3\oplus {u}_7 $$
(8)
Considering that u0, u2, and u4 are frozen bits that are set to zero, Eq. (8) can be rewritten as
$$ {x}_0={u}_6\oplus {u}_1\oplus {u}_5\oplus {u}_3\oplus {u}_7 $$
(9)
From Fig. 1, we also have the following relations
$$ \left\{\begin{array}{l}{u}_7={x}_7,\\ {}{u}_6={x}_6\oplus {x}_7,\\ {}{u}_5={x}_5\oplus {x}_7,\\ {}{u}_3={x}_3\oplus {x}_7,\\ {}{u}_1={x}_1\oplus {x}_3\oplus {x}_5\oplus {x}_7\end{array}\right. $$
(10)
Substituting Eq. (10) into Eq. (9), x0 becomes
$$ {x}_0=\left({x}_6\oplus {x}_7\right)\oplus \left({x}_1\oplus {x}_3\oplus {x}_5\oplus {x}_7\right)\oplus \left({x}_5\oplus {x}_7\right)\oplus \left({x}_3\oplus {x}_7\right)\oplus {x}_7 $$
(11)
A logical XOR operation outputs zero when its two inputs are equal. Thus, Eq. (11) can be rewritten as
$$ {x}_0={x}_6\oplus {x}_1\oplus {x}_7 $$
(12)
Similarly, x2 and x4 can be derived
$$ {x}_2={u}_2\oplus {u}_6\oplus {u}_3\oplus {u}_7={u}_6\oplus {u}_3\oplus {u}_7=\left({x}_6\oplus {x}_7\right)\oplus \left({x}_3\oplus {x}_7\right)\oplus {x}_7={x}_6\oplus {x}_3\oplus {x}_7 $$
(13)
$$ {x}_4={u}_4\oplus {u}_6\oplus {u}_5\oplus {u}_7={u}_6\oplus {u}_5\oplus {u}_7=\left({x}_6\oplus {x}_7\right)\oplus \left({x}_5\oplus {x}_7\right)\oplus {x}_7={x}_6\oplus {x}_5\oplus {x}_7 $$
(14)
By combining Eqs. (12), (13), and (14), we simplify the encoding diagram illustrated in Fig. 1 to the diagram in Fig. 2.
Compared with the computational complexity of the original algorithm [11], the proposed OEA reduces the number of computational units without increasing the memory bits. Figures 1 and 2 show the difference in the number of computing units: only four XOR computing units can be removed in Fig. 1, whereas 12 XOR operations are omitted in Fig. 2. u1, u3, u5, u6, and u7 are intermediate variables that can be ignored. The gray nodes on the rightmost side represent the information bits, and the black nodes on the leftmost side represent the frozen bits; the black nodes on the right are unknown. The outputs of the SPC encoder are x0, x2, and x4.
The outputs of the SPC encoder can also be generated from a two-dimensional generator matrix. For example, we construct a generator matrix with the polar code length N = 8
$$ {G}_8=\left[\begin{array}{cccccccc}1& 0& 0& 0& 0& 0& 0& 0\\ {}1& 1& 0& 0& 0& 0& 0& 0\\ {}1& 0& 1& 0& 0& 0& 0& 0\\ {}1& 1& 1& 1& 0& 0& 0& 0\\ {}1& 0& 0& 0& 1& 0& 0& 0\\ {}1& 1& 0& 0& 1& 1& 0& 0\\ {}1& 0& 1& 0& 1& 0& 1& 0\\ {}1& 1& 1& 1& 1& 1& 1& 1\end{array}\right] $$
(15)
The codeword vector x can be obtained from the matrix product of u (Eq. (3)) and the above G8 (Eq. (15)). For example, we obtain the element x0 (Eq. (16)) by multiplying u (Eq. (3)) with G8 (Eq. (15))
$$ {x}_0={u}_0\oplus {u}_1\oplus {u}_2\oplus {u}_3\oplus {u}_4\oplus {u}_5\oplus {u}_6\oplus {u}_7 $$
(16)
Comparing x0 in Eq. (16) with x0 in Eq. (8), we find that the encoding result is the same: one is obtained from the SPC encoding diagram (Eq. (8)), and the other from the generator matrix approach (Eq. (16)). The proposed OEA utilizes the characteristics of the generator matrix and the matrix transformation. For the general case with code length N, Section 4 shows that the outputs from the generator matrix approach are the same as the outputs from the diagram approach. For example, when N = 8 in Eq. (7) and A = {1,3,5,6,7}, GAA−1GAAc = [1, 0, 0; 0, 1, 0; 0, 0, 1; 1, 1, 1; 1, 1, 1], and xAc has the elements x0, x2, and x4 as outputs. These outputs are the same as the results of Eqs. (12), (13), and (14). Therefore, the characteristics of the generator matrix can be exploited in our optimized encoding algorithm for the general case.
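The N = 8 example can be checked numerically with the sketches given earlier (again our own illustration, reusing the hypothetical polar_generator helper):

```python
import numpy as np

# N = 8, A = {1, 3, 5, 6, 7}: reproduce G_AA^{-1} G_AAc and the reduced XOR relations.
G8 = polar_generator(3)                      # the matrix of Eq. (15)
A, Ac = [1, 3, 5, 6, 7], [0, 2, 4]
T = (G8[np.ix_(A, A)].astype(int) @ G8[np.ix_(A, Ac)].astype(int)) % 2
print(T)
# [[1 0 0]
#  [0 1 0]
#  [0 0 1]
#  [1 1 1]
#  [1 1 1]]
# Column 0: x0 = x1 + x6 + x7, column 1: x2 = x3 + x6 + x7, column 2: x4 = x5 + x6 + x7,
# matching Eqs. (12), (13), and (14).
```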

4 The theory of OEA

Before discussing the characteristics of the generator matrix, we first divide the distribution of the information bits [15] into two groups based on a preset capacity value and then into two areas based on the bit channel index. Figures 3, 4, 5, and 6 show the capacity of binary erasure channels (BEC) when the code length N is 128, 256, 1024, and 2048, respectively. We map each bit channel index to the set A or the set Ac according to whether its capacity is larger or smaller than the preset value. Areas in which all bit channel indices belong to A, or all belong to Ac, are defined as non-hybrid areas; the area whose bit channel indices belong to both A and Ac is defined as a hybrid area. For example, in Fig. 3, we can set 0.7 as the preset capacity value. When the capacity is larger than 0.7, the bit channel index belongs to A; when the capacity is smaller than 0.7, the bit channel index belongs to Ac. For the set A, we select the lowest index in the region whose capacity is larger than 0.7, draw line 1 across this lowest index, and denote the left side of line 1 as a non-hybrid area. Similarly, for the set Ac, we select the highest index whose capacity is smaller than 0.7 and draw line 2 across this highest index; the right side of line 2 is also a non-hybrid area. The area between line 1 and line 2 is the hybrid area, in which both frozen bits and information bits are distributed. For Figs. 4, 5, and 6, the same approach can be used to divide the areas into hybrid and non-hybrid areas and to map the distribution of the information bits into the sets A and Ac.
The following variables are defined to describe the highest index value in Ac, the lowest index value in A, and the numbers of information bits and frozen bits in the hybrid and non-hybrid areas. The defined variables are listed in Table 1.
Table 1 Defined variables

Variable   Definition
pfi        The first information bit index
plf        The last frozen bit index
Δ          The width of the hybrid area
Δi         The number of information bits in the hybrid area
Δf         The number of frozen bits in the hybrid area
For the non-hybrid areas, the index value pfi represents the number of frozen bits, and N − plf − 1 represents the number of information bits, where N is the code length defined previously. The value of N is the total number of frozen bits and information bits. The width of the hybrid area in the generator matrix can be represented by Δ = plf − pfi + 1. The value of Δ also represents the sum of the numbers of frozen bits and information bits in this hybrid area. Figure 7 shows the defined variables pfi, Δ, and N − plf − 1 when the code length N is 16.
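As a small illustration (our own sketch; the function name hybrid_area_params is hypothetical), the quantities of Table 1 can be computed directly from the index set A:

```python
def hybrid_area_params(A, N):
    """Compute the Table 1 quantities from the information set A (0-based) and code length N."""
    A = sorted(A)
    Ac = sorted(set(range(N)) - set(A))
    p_fi = A[0]                   # first information bit index
    p_lf = Ac[-1]                 # last frozen bit index
    delta = p_lf - p_fi + 1       # width of the hybrid area
    delta_i = sum(1 for i in A if p_fi <= i <= p_lf)    # information bits in the hybrid area
    delta_f = sum(1 for i in Ac if p_fi <= i <= p_lf)   # frozen bits in the hybrid area
    return p_fi, p_lf, delta, delta_i, delta_f

# For N = 16 and A = {3, 5, 7, 9, 11, 13, 14, 15} (the case study of Section 5):
# p_fi = 3, p_lf = 12, Delta = 10, Delta_i = 5, Delta_f = 5.
```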
In Section 2, we denoted GN as a lower triangular matrix with all ones on the diagonal, and GN is invertible. In the binary field GF(2), GN is its own inverse, GN−1 = GN. Gi,j represents an element of the matrix GN, with i, j ∈ N, where N = {0, 1, ..., N − 1}. When i = j, Gi,j = 1; when i < j, Gi,j = 0.
We have the following discrete function definitions. These functions are used to show the lower triangular structure of the generator matrix GN and its submatrices GAA and GAAc.
Discrete function definition:
fN(x) = x, where the discrete variable x of the function satisfies x ∈ N. The function values lie in N, fN(x) ⊂ N.
fA(x) = A(x), where the discrete variable x of the function satisfies x ∈ {1, 2, ..., K}, and K is the length of the information bits. The function values satisfy fA(x) ⊂ A, and fA(1) = A(1) = pfi. fA(x) is a monotonically increasing function.
fc(x) = Ac(x), where the discrete variable x of the function satisfies x ∈ {1, 2, ..., N − K}; N is the code length and N − K is the length of the frozen bits. The function values satisfy fc(x) ⊂ Ac, and fc(N − K) = Ac(N − K). fc(x) is a monotonically increasing function.
Since both GN and GAA are square matrices, for the defined function fN(x) = x, x represents the xth row of GN as well as the xth column of GN, and fN(x) represents the fN(x)th row of GN as well as the fN(x)th column of GN. For fA(x) = A(x), x represents the xth row of GAA and GAAc, as well as the xth column of GAA, while fA(x) represents the A(x)th row of GN as well as the A(x)th column of GN. For fc(x) = Ac(x), x represents the xth column of GAAc, and fc(x) represents the Ac(x)th row of GN as well as the Ac(x)th column of GN.
Lemma 1: GAA is a lower triangular matrix with all ones on the diagonal.
Proof: For GAA, when x = y, x, y ∈ {1, 2, ..., K}, we have fA(x) = fA(y) and hence fN(fA(x)) = fN(fA(y)). Since fA(x), fA(y) ∈ A ⊆ N and GfA(x),fA(y) = 1, we obtain (GAA)x,y = 1. When x < y, x, y ∈ {1, 2, ..., K}, because fA(x) is a monotonically increasing discrete function, fA(x) < fA(y) and hence fN(fA(x)) < fN(fA(y)). Since fA(x), fA(y) ∈ A ⊆ N, we obtain GfA(x),fA(y) = 0 and therefore (GAA)x,y = 0. Given that the inverse of a lower triangular matrix is also lower triangular and GAA is lower triangular, GAA−1 is a lower triangular matrix as well.
Lemma 2: A submatrix of GAAc has the lower triangular structure.
Proof: For fA(x1) with x1 ∈ {1, 2, ..., Δi}, we have fA(x1) ∈ {A(1), A(2), ..., A(Δi)} and fA(1) = A(1) = pfi. For fc(x2) with x2 ∈ {N − K − Δf, N − K − Δf + 1, ..., N − K}, we have fc(x2) ∈ {Ac(N − K − Δf), Ac(N − K − Δf + 1), ..., Ac(N − K)} and fc(N − K) = Ac(N − K) = plf. In the hybrid area, we know that A(1) < Ac(N − K − Δf) and A(Δi) < Ac(N − K). As the above analysis shows, each information bit index considered here is smaller than the corresponding frozen bit index. Since fA(x) and fc(x) are monotonically increasing discrete functions, fA(x1) < fc(x2), GfA(x1),fc(x2) = 0, and (GAAc)x1,x2 = 0. Hence, a submatrix of GAAc has the lower triangular structure whenever fA(x1) < fc(x2), where x1 and x2 range over the different discrete sets x1 ∈ {1, 2, ..., Δi} and x2 ∈ {N − K − Δf, N − K − Δf + 1, ..., N − K}.
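Both lemmas can be checked numerically for the N = 16 case used in the next section; the snippet below is our own verification sketch (reusing the hypothetical polar_generator helper from Section 2), not part of the original paper.

```python
import numpy as np

# Numerical check of Lemmas 1 and 2 for the N = 16 case study of Section 5.
N, n = 16, 4
A  = [3, 5, 7, 9, 11, 13, 14, 15]
Ac = [0, 1, 2, 4, 6, 8, 10, 12]
G  = polar_generator(n)
G_AA, G_AAc = G[np.ix_(A, A)], G[np.ix_(A, Ac)]

# Lemma 1: G_AA is lower triangular with a unit diagonal.
assert np.all(np.triu(G_AA, k=1) == 0) and np.all(np.diag(G_AA) == 1)

# Lemma 2: (G_AAc)_{x1,x2} = 0 whenever f_A(x1) < f_c(x2),
# which produces the lower triangular structure of its right-hand block.
for x1, a in enumerate(A):
    for x2, c in enumerate(Ac):
        if a < c:
            assert G_AAc[x1, x2] == 0
```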

5 A case study of OEA

For a code length of N = 16 and a code rate of 1/2, we have K = 8, A = {3, 5, 7, 9, 11, 13, 14, 15}, and Ac = {0, 1, 2, 4, 6, 8, 10, 12}. For the generator matrix G16 in Fig. 8a, the rows with information bit indices are extracted to form (G16)A; Fig. 8a, b, c1, and c2 illustrate the detailed procedure of the matrix transformation.
The row elements in the solid line box in Fig. 8a form a new matrix (G16)A shown in Fig. 8b. In Fig. 8b, fA(1) = A(1) = 3 and fc(N − K) = Ac(N − K) = 12, so column 3 corresponds to the first information bit pfi and column 12 corresponds to the last frozen bit plf. We can form (G16)AA in Fig. 8c2 by extracting the columns in the dashed box in Fig. 8b, and (G16)AAc in Fig. 8c1 from the remaining columns of Fig. 8b. By Lemma 1, (G16)AA is a lower triangular matrix with all ones on the diagonal, (G16)AA−1 = (G16)AA, and GAA−1 is a lower triangular matrix. By Lemma 2, (G16)AAc has a lower triangular structure. For the general case of N = 2^n, n ≥ 1, the dimension of the lower triangular block in (G16)AAc is N − K − pfi + 1: the column dimension of (G16)AAc is N − K, and after removing the first part of the matrix, the remaining size is d = N − K − pfi + 1. GAA is its own inverse, and it is a lower triangular matrix of dimension K.
The matrices (G16)AA−1 and (G16)AAc in Fig. 9a and b are partitioned to obtain a zero submatrix block. To discuss the general case further, Fig. 9a can be partitioned into the submatrices g1, g2, g3, and g4 in Fig. 10a, where the submatrix g2 is a zero matrix. Figure 9b can be partitioned into the submatrices c1, c2, c3, and c4 in Fig. 10b, where the submatrix c2 is a zero matrix.

The block matrices in Fig. 10a and b can be written as Eqs. (17) and (18), respectively:

$$ {G}_{AA}^{-1}=\left(\begin{array}{l}{g}_1\kern0.5em {g}_2\\ {}{g}_3\kern0.5em {g}_4\end{array}\right) $$
(17)
$$ {G}_{AAc}=\left(\begin{array}{l}{c}_1\kern0.5em {c}_2\\ {}{c}_3\kern0.5em {c}_4\end{array}\right) $$
(18)
When d is even, the sizes of the submatrices g1, g2, g3, and g4 are \( \frac{d}{2}\times \frac{d}{2} \), \( \frac{d}{2}\times \left(K-\frac{d}{2}\right) \), \( \left(K-\frac{d}{2}\right)\times \frac{d}{2} \), and \( \left(K-\frac{d}{2}\right)\times \left(K-\frac{d}{2}\right) \), respectively, and the sizes of the submatrices c1, c2, c3, and c4 are \( \frac{d}{2}\times \left(N-K-\frac{d}{2}\right) \), \( \frac{d}{2}\times \frac{d}{2} \), \( \left(K-\frac{d}{2}\right)\times \left(N-K-\frac{d}{2}\right) \), and \( \left(K-\frac{d}{2}\right)\times \frac{d}{2} \), respectively. Due to the zero property of g2 and c2, Eq. (7) can be rewritten as
$$ {x}_{Ac}={x}_A{G}_{AA}^{-1}{G}_{AA c}={x}_A\left(\begin{array}{cc}{g}_1{c}_1& 0\\ {}{g}_3{c}_1+{g}_4{c}_3& {g}_4{c}_4\end{array}\right) $$
(19)
For Eq. (19), when d is even, the dimension of the zero submatrix in GAA−1GAAc is \( \frac{d}{2}\times \frac{d}{2} \); when d is odd, it is \( \frac{d-1}{2}\times \frac{d-1}{2} \). The corresponding computations are omitted when multiplying by this zero submatrix during encoding, so computing resources are saved after applying the proposed OEA. The OEA applies to the general case. The pseudocode of our OEA is listed in the algorithm for the proposed OEA, from which the characteristics of the proposed algorithm and its difference from the encoding algorithm in [11] can be clearly seen.
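The following sketch (ours, not the paper's pseudocode; the name oea_encode is hypothetical) illustrates how the zero block in Eq. (19) lets the encoder skip the corresponding multiplications. It reuses the conventions of the earlier sketches, uses GAA as its own inverse per Section 2, and assumes d is even for brevity.

```python
import numpy as np

def oea_encode(x_A, A, G):
    """Sketch of Eq. (19): partition G_AA (its own inverse) and G_AAc so that the
    zero blocks g2 and c2 are never multiplied; assumes d is even."""
    N = G.shape[0]
    A = sorted(A)
    Ac = sorted(set(range(N)) - set(A))
    K, p_fi = len(A), A[0]
    d = N - K - p_fi + 1
    h = d // 2
    G_AA  = G[np.ix_(A, A)].astype(int)
    G_AAc = G[np.ix_(A, Ac)].astype(int)
    g1, g3, g4 = G_AA[:h, :h], G_AA[h:, :h], G_AA[h:, h:]          # g2 (top right) is zero
    c1, c3, c4 = G_AAc[:h, :-h], G_AAc[h:, :-h], G_AAc[h:, -h:]    # c2 (top right) is zero
    xA = np.asarray(x_A, dtype=int)
    left  = (xA[:h] @ ((g1 @ c1) % 2) + xA[h:] @ ((g3 @ c1 + g4 @ c3) % 2)) % 2
    right = (xA[h:] @ ((g4 @ c4) % 2)) % 2
    x = np.zeros(N, dtype=int)
    x[A], x[Ac] = xA, np.concatenate([left, right])
    return x

# For N = 16 and A = {3, 5, 7, 9, 11, 13, 14, 15}, this matches spc_encode while the
# h x h block of XORs corresponding to the zero submatrix is never evaluated.
```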

6 Result analysis

Table 2 lists the dimension d of the lower triangular submatrix of GAAc for different code lengths N and code rates R, where d = N − K − pfi + 1. Detailed data for three code lengths, with three code rates for each, are listed in the table.
Table 2 Detailed data of d

N      R      K      pfi   d
1024   0.5    512    16    497
1024   0.75   768    4     253
1024   0.9    922    4     99
2048   0.5    1024   16    1009
2048   0.75   1536   8     505
2048   0.9    1843   4     202
4096   0.5    2048   32    2017
4096   0.75   3072   8     1017
4096   0.9    3686   1     410
Figure 11 shows d on a linear ordinate. For each code length N, the relation between d and R is approximately linear, since the variation of pfi is negligible relative to the code length.
Figure 12 shows the variation trend of d with the code length N. At different code rates (for example, R = 0.5, 0.75, and 0.9), d increases as the code length N increases. In particular, when R = K/N = 0.5, the growth is faster than at the other rates. In other words, the smaller the code rate is, the greater the value of d will be.
After applying the proposed OEA to the transformed matrix, the percentages of zero elements in the lower triangular part are shown in Table 3. For NSPC and SPC, the percentages of zero elements shown in Table 3 are calculated by Eqs. (20) and (21), respectively.
$$ {P}_{OEA- NSPC}=\frac{\left({d}^2-d\right)/2}{N^2} $$
(20)
$$ {P}_{OEA- SPC}=\frac{\left({d}^2-d\right)/2}{N\times \left(N-K\right)} $$
(21)
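The two percentages can be evaluated directly from N, K, and pfi; the short sketch below (our own illustration, with a hypothetical function name) simply implements Eqs. (20) and (21).

```python
def zero_element_percentages(N, K, p_fi):
    """Evaluate Eqs. (20) and (21): the (d^2 - d)/2 zero elements of the d x d
    lower triangular block, with d = N - K - p_fi + 1."""
    d = N - K - p_fi + 1
    zeros = (d * d - d) / 2
    p_oea_nspc = zeros / (N * N)         # Eq. (20)
    p_oea_spc  = zeros / (N * (N - K))   # Eq. (21)
    return p_oea_nspc, p_oea_spc
```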
Table 3 Percentages of zero elements in the transformed matrix for our OEA algorithm

       N = 2^10 = 1024       N = 2^11 = 2048       N = 2^12 = 4096
R      NSPC (%)   SPC (%)    NSPC (%)   SPC (%)    NSPC (%)   SPC (%)
0.5    11.8       47.0       10.9       58.5       12.3       49.2
0.75   3.0        16.2       3.0        16.2       3.1        16.4
0.9    0.47       5.2        0.48       5.4        0.49       5.5
For NSPC, only 0.47% of the computing resources can be saved when N = 1024 and R = 0.9, while 11.8% can be saved when N = 1024 and R = 0.5. For SPC, 58.5% of the resource consumption can be saved when the code rate R = 0.5 and the code length N = 2048. When the code rate increases, the percentage of zero elements in the lower triangle decreases. However, at the same code rate, the percentage is almost invariable across different code lengths for both SPC and NSPC.
The comparison of systematic polar encoders is shown in Table 4. Unlike [10, 11], we use the total number of XORs to represent the computational complexity in this paper. For Encoder A in [10] and the SPC in [11], the numbers of XORs can be approximated as \( \left({N}^2+N\right)\left({n}^2+n\right)/2>\frac{n^2}{2}\cdot {N}^2>{N}^2 \) and \( N\left({N}^2+N\right)/2\ge \frac{N^3}{2} \), respectively. For the proposed OEA, the computational complexity is NK − K^2 − N + K = (N − K)(K − 1) < NK. Therefore, the computational complexity of the OEA is lower than that of Encoder A in [10] and the SPC in [11]. As for Encoder B, Encoder C, and NSPC in [10], the numbers of XORs can be approximated as N(1 + log2 N) > o(N), N(1 + 2log2 N) > o(N), and \( \frac{N}{2}{\log}_2N>o(N) \), respectively. Compared with the recursive algorithms Encoder B, Encoder C, and NSPC in [10], the advantage of the proposed OEA in the number of XORs is not obvious, but the OEA is beneficial to hardware implementation. For the proposed OEA, the matrix segmentation and transformation can be completed in a preprocessing stage, followed by the multiplication of a matrix and a vector, which can be realized with XOR gates. There is no reverse signal transmission, which benefits the timing and pipeline design of the hardware. Encoder B, Encoder C, and NSPC in [10] are implemented by recursive algorithms based on the divide-and-conquer method in software; the signal is passed from the first level to the last level and then back to the first level. This bidirectional transmission of signals is not conducive to hardware implementation, especially in high-speed pipeline structures.
Table 4 Comparison of systematic polar encoders

Algorithm        Recursion   Memory          XORs (times)           Hardware implementation
Encoder A [10]   No          N(1 + log2 N)   (N^2 + N)(n^2 + n)/2   Yes
Encoder B [10]   Yes         2N − 1          N(1 + log2 N)          No
Encoder C [10]   Yes         N               N(1 + 2log2 N)         No
NSPC [10]        Yes/No      2N              (N log2 N)/2           No
SPC [11]         No          N               N(N^2 + N)/2           Yes
Our OEA          No          N               NK − K^2 − N + K       Yes
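For a rough feel of the expressions in Table 4, the snippet below (our own illustration) evaluates each XOR count for one sample configuration; N and R here are example values, not figures from the paper.

```python
import math

N, R = 1024, 0.5
K, n = int(N * R), int(math.log2(N))

xor_counts = {
    "Encoder A [10]": (N**2 + N) * (n**2 + n) // 2,
    "Encoder B [10]": N * (1 + n),
    "Encoder C [10]": N * (1 + 2 * n),
    "NSPC [10]":      N * n // 2,
    "SPC [11]":       N * (N**2 + N) // 2,
    "Our OEA":        N * K - K**2 - N + K,   # = (N - K)(K - 1)
}
for name, count in xor_counts.items():
    print(f"{name:>15}: {count} XORs")
```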

7 Conclusions and prospect

In this paper, we propose an optimized encoding algorithm (OEA) for SPC. The number of zero elements in the transformed generator matrix and their locations can be determined by the proposed OEA. When the code rate is 0.5, about half of the XOR computing units can be omitted to save computation resources, owing to the large number of zero elements found in the submatrix. For code rates higher than 0.5, fewer computation units are saved. The proposed OEA not only reduces the number of XOR computing units compared with the existing non-recursive algorithms, but is also beneficial to hardware implementation compared with the existing recursive algorithms.

Acknowledgements

The authors are grateful for the valuable comments and suggestions provided by the reviewers.

Competing interests

The authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

References
1. E. Arikan, Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels. IEEE Trans. Inf. Theory 55, 3051–3073 (2009)
2. E. Arikan, Channel polarization: A method for constructing capacity-achieving codes, in IEEE International Symposium on Information Theory (Toronto), pp. 1173–1177 (2008)
3. S. Zhao, P. Shi, B. Wang, Designs of Bhattacharyya parameter in the construction of polar codes, in 7th International Conference on Wireless Communications, Networking and Mobile Computing (Wuhan), pp. 1–4 (2011)
4. R. Mori, T. Tanaka, Performance of polar codes with the construction using density evolution. IEEE Commun. Lett. 13(7), 519–521 (2009)
5. A. Arpure, S. Gugulothu, FPGA implementation of polar codes based encoder architecture, in International Conference on Communication and Signal Processing (India), April 6–8, pp. 0691–0695 (2016)
6. X. Shih, P. Huang, Y. Chen, High-speed low-area-cost VLSI design of polar codes encoder architecture using radix-k processing engines, in IEEE 5th Global Conference on Consumer Electronics (Kyoto), pp. 1–2 (2016)
7. C. Zhang, J. Yang, X. You, S. Xu, Pipelined implementations of polar encoder and feed-back part for SC polar decoder, in IEEE International Symposium on Circuits and Systems (Lisbon), pp. 3032–3035 (2015)
8. H. Yoo, I.-C. Park, Partially parallel encoder architecture for long polar codes. IEEE Transactions on Circuits and Systems II: Express Briefs 62(3), 306–310 (2015)
9. E. Arikan, Systematic polar coding. IEEE Commun. Lett. 15(8), 860–862 (2011)
10. H. Vangala, Y. Hong, E. Viterbo, Efficient algorithms for systematic polar encoding. IEEE Commun. Lett. 20(1), 17–20 (2016)
11. G. TaiChen, Z. Zhaoyang, Z. Caijun, Z. Liang, A low complexity encoding algorithm for systematic polar codes. IEEE Commun. Lett. 20(7), 1277–1280 (2016)
12. G. Sarkis, P. Giard, A. Vardy, C. Thibeault, W.J. Gross, Fast polar decoders: Algorithm and implementation. IEEE J. Sel. Areas Commun. 32(5), 946–957 (2014)
13. G. Sarkis, I. Tal, P. Giard, A. Vardy, Flexible and low-complexity encoding and decoding of systematic polar codes. IEEE Trans. Commun. 64(7), 2732–2745 (2016)
14. X. Wang, T. Ge, J. Li, C. Su, F. Hong, Efficient multi-rate encoder of QC-LDPC codes based on FPGA for WIMAX standard. Chinese Journal of Electronics 26(2), 250–255 (2017)
15. X. Wang, W. Cao, J. Li, et al., Improved min-sum algorithm based on density evolution for low-density parity check codes. IET Communications 11(10), 1582–1586 (2017)
16. X. Wang, J. Li, Y. Wang, et al., Design of dual-mode decoder based on LDPC/turbo code. IET Communications 11(8), 1325–1329 (2017)
17. H. Vangala, E. Viterbo, Y. Hong, A comparative study of polar code constructions for the AWGN channel. Mathematics 1, 1–7 (2015)
Metadata
Title: An optimized encoding algorithm for systematic polar codes
Authors: Xiumin Wang, Zhihong Zhang, Jun Li, Yu Wang, Haiyan Cao, Zhengquan Li, Liang Shan
Publication date: 01.12.2019
Publisher: Springer International Publishing
DOI: https://doi.org/10.1186/s13638-019-1491-4
