3.1 STBC-CQ-PNC mapping rule
We begin our presentation by expanding (10) as follows:
$$\begin{aligned} w_1&= r_{11} x_1^{(1)} + r_{13} x_1^{(2)} + r_{14} x_2^{(2)} + u_1, \end{aligned}$$
(11)
$$\begin{aligned} w_2&= - r_{11}^* x_2^{(1)} + r_{14}^* x_1^{(2)} - r_{13}^* x_2^{(2)} + u_2, \end{aligned}$$
(12)
$$\begin{aligned} w_3&= r_{33} x_1^{\left( 2 \right) } + u_3, \end{aligned}$$
(13)
$$\begin{aligned} w_4&= - r_{33}^* x_2^{\left( 2 \right) } + u_4. \end{aligned}$$
(14)
Because \(x_1^{(1)} = x_1^{(1)} \,\oplus \, x_2^{(2)} \,\oplus \, x_2^{(2)}\) and \(x_2^{(2)} = x_1^{(1)}\, \oplus \, x_2^{(2)} \,\oplus \, x_1^{(1)}\), the relay can encode the pair of symbols \(x_1^{(1)}\) and \(x_2^{(2)}\) to obtain the network-coded symbol \(\overline{x_1^{(1)} \oplus x_2^{(2)} } \) as follows. The method for obtaining the other symbol \(\overline{x_2^{(1)} \oplus x_1^{(2)} } \) is similar and is therefore omitted to simplify the presentation.
The mapping rule for the STBC-CQ-PNC involves the following four steps:
Step 1: Soft estimation of \(x_1^{(2)}\) and \(x_2^{(2)}\).
From Eqs. (13) and (14), soft estimates of \(x_1^{(2)}\) and \(x_2^{(2)}\) are given by [15]:
$$\begin{aligned} \begin{array}{l} {\hat{x}}_1^{(2)} = \mathrm {tanh} \left( {\left( {r_{33}^* w_3 } \right) /\sigma _n^2 } \right) \\ {\hat{x}}_2^{(2)} = \mathrm {tanh} \left( {\left( { - r_{33} w_4 } \right) /\sigma _n^2 } \right) . \end{array} \end{aligned}$$
(15)
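To make Step 1 concrete, the following sketch applies the tanh soft demapper of Eq. (15) to a noiseless observation. Applying tanh separately to the real and imaginary parts of the matched-filter statistic is our reading of the QPSK case; the function name and numerical values are illustrative only.

```python
import numpy as np

def soft_estimate(match, w, sigma2):
    """tanh soft demapper of Eq. (15), applied per real/imaginary dimension."""
    z = match * w / sigma2              # matched-filter statistic
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

# Noiseless sanity check: x = 1 - 1j observed through w3 = r33 * x, cf. Eq. (13)
r33, sigma2 = 0.8, 0.1
x_true = 1 - 1j
w3 = r33 * x_true
x_soft = soft_estimate(np.conj(r33), w3, sigma2)   # \hat{x}_1^(2) of Eq. (15)
# at high SNR the soft estimate approaches the true QPSK symbol
```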
Step 2: Channel quantization.
From Eq. (11), we can see that if \(x_1^{(1)}\) and \(x_2^{(2)}\) are considered the desired signals, then \(x_1^{(2)}\) is the interference. Dividing both sides of (11) by the quantization step \(r_{11}\) gives the quantized value
$$\begin{aligned} c_0 = \frac{{w_1 }}{{r_{11} }} = x_1^{(1)} + \frac{{r_{14} }}{{r_{11} }}x_2^{(2)} + \frac{{r_{13} }}{{r_{11} }}x_1^{(2)} + \frac{{u_1 }}{{r_{11} }}. \end{aligned}$$
(16)
Cancelling the estimated interference \(({{r_{13} }}/{{r_{11} }}){\hat{x}}_1^{(2)}\), with \({\hat{x}}_1^{(2)}\) obtained in Step 1, Eq. (16) can be rewritten as follows:
$$\begin{aligned} c_0 = x_1^{(1)} + Lx_2^{(2)} + lx_2^{(2)} + \frac{{r_{13} }}{{r_{11} }}\delta _{x_1^{(2)} } + \frac{{u_1 }}{{r_{11} }}, \end{aligned}$$
(17)
where \(\delta _{x_1^{(2)} } \triangleq x_1^{(2)} - {\hat{x}}_1^{(2)}\) is the interference resulting from the estimation error of \(x_1^{(2)}\). The values of \(L\) and \(l\) are given by
$$\begin{aligned}&L = L_r + jL_i = \mathrm {round}\left( {\frac{{r_{14} }}{{r_{11} }}} \right) = \mathop {\,\mathrm {arg}\,\mathrm {min} }\limits _{v \in \left( {Z + jZ} \right) } \left\| {\frac{{r_{14} }}{{r_{11} }} - v} \right\| \nonumber \\&l = l_r + jl_i = \frac{{r_{14} }}{{r_{11} }} - L, \end{aligned}$$
(18)
in which \(j^2 = - 1\) and \(\mathrm {round}\left( \cdot \right) \) denotes rounding to the nearest complex integer.
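As a minimal numerical illustration of the channel quantization in Eq. (18), the complex rounding and the residual \(l\) can be computed as follows (the channel values below are arbitrary examples):

```python
import numpy as np

def complex_round(z):
    """round(.) of Eq. (18): component-wise rounding to the nearest complex integer."""
    return complex(np.round(z.real), np.round(z.imag))

# Example channel ratio r14 / r11 (values are arbitrary)
r11, r14 = 1.25, 2.1 - 0.6j
ratio = r14 / r11
L = complex_round(ratio)   # integer part L = L_r + j L_i
l = ratio - L              # residual with |l_r| <= 1/2 and |l_i| <= 1/2
```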
Step 3: Removing residual interference.
For soft network coding based on QR decomposition (referred to as Soft QR-NC for short), the estimate \({\hat{x}}_2^{(2)}\) in Eq. (15) is used to cancel all of the \(x_2^{(2)}\) terms in Eq. (17) before estimating \(x_1^{(1)}\). If \(x_2^{(2)}\) could be estimated perfectly, it would not affect the estimate of \(x_1^{(1)}\). Unfortunately, this assumption is difficult to achieve, and using an imperfect estimate of \(x_2^{(2)}\) to estimate \(x_1^{(1)}\) can cause errors. To avoid this, we propose using the estimate of \(x_2^{(2)}\) to cancel only the residual interference, i.e. the term \(lx_2^{(2)}\) in Eq. (17). This solution reduces the incurred noise, leading to improved error performance compared with the traditional Soft QR-NC. After removing the residual interference, Eq. (17) becomes
$$\begin{aligned}&c_1 \triangleq x_1^{(1)} + Lx_2^{(2)} + l\left( x_2^{(2)} - {\hat{x}}_2^{(2)} \right) + \frac{{r_{13} }}{{r_{11} }}\delta _{x_1^{(2)} } + \frac{{u_1 }}{{r_{11} }}\nonumber \\&\quad = x_1^{(1)} + Lx_2^{(2)} + l\delta _{x_2^{(2)} } + \frac{{r_{13} }}{{r_{11} }}\delta _{x_1^{(2)} } + \frac{{u_1 }}{{r_{11} }}, \end{aligned}$$
(19)
where \(\delta _{x_2^{(2)} } = x_2^{(2)} - {\hat{x}}_2^{(2)}\) is the interference resulting from the estimation error of \(x_2^{(2)}\). Define \(x_m^{(k)} \triangleq 2s_m^{(k)} - (1 + j)\), \((m = 1,2)\), where \(x_m^{(k)} \in \left\{ { \pm 1 \pm j} \right\} \) is the QPSK-modulated symbol and \(s_m^{(k)} \in \left\{ {0,\,1,\,j,\,1 + j} \right\} \) is the un-modulated symbol of \(x_m^{(k)} \) at terminal node \(\mathrm {N}_k\). Now, similar to [3], \(x_m^{(k)} \) in Eq. (19) is transformed back to its un-modulated symbol as follows:
$$\begin{aligned} {{{\tilde{c}}}_1}&= \frac{1}{2}\left[ {{c_1} + \left( {1 + j} \right) + L\left( {1 + j} \right) } \right] \nonumber \\&=\frac{1}{2}\left[ {\left( {x_1^{(1)} + 1 + j} \right) + L\left( {x_2^{(2)} + 1 + j} \right) } \right] \nonumber \\&\quad + \frac{l}{2}{\delta _{x_2^{(2)}}} + \frac{{{r_{13}}}}{{2{r_{11}}}}{\delta _{x_1^{(2)}}} + \frac{{{u_1}}}{{2{r_{11}}}}\nonumber \\&= s_1^{(1)} + Ls_2^{(2)} + \frac{l}{2}{\delta _{x_2^{(2)}}} + \frac{{{r_{13}}}}{{2{r_{11}}}}{\delta _{x_1^{(2)}}} + \frac{{{u_1}}}{{2{r_{11}}}}. \end{aligned}$$
(20)
The hard decision on \(s_1^{(1)} + Ls_2^{(2)}\) can then be obtained as:
$$\begin{aligned} {\hat{c}}_1 = \mathrm {round}\left( {{\tilde{c}}_1 } \right) = s_1^{(1)} + Ls_2^{(2)} + e = s_{1,2} + e, \end{aligned}$$
(21)
where \(e = \mathrm {round}\left( {\frac{l}{2}\delta _{x_2^{(2)} } + \frac{{r_{13} }}{{2r_{11} }}\delta _{x_1^{(2)} } + \frac{{u_1 }}{{2r_{11} }}} \right) \) is the detection error due to the rounding operation and \(s_{1,2} \triangleq s_1^{(1)} + Ls_2^{(2)} \) is the desired part that needs to be estimated.
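The shift-and-scale relation \(x = 2s - (1+j)\) and the hard decision of Eq. (21) can be checked numerically in the noiseless case. In this sketch the symbols and \(L\) are arbitrary examples, and the noise and estimation-error terms of Eq. (20) are set to zero.

```python
import numpy as np

def complex_round(z):
    # nearest complex integer, as in Eq. (18)
    return complex(np.round(z.real), np.round(z.imag))

# Un-modulated symbols s in {0, 1, j, 1+j}; QPSK symbols x = 2s - (1+j)
s1, s2 = 1 + 1j, 1j
x1, x2 = 2 * s1 - (1 + 1j), 2 * s2 - (1 + 1j)
L = 1 + 2j                                        # example quantized channel gain

c1 = x1 + L * x2                                  # Eq. (19) without noise/error terms
c1_tilde = 0.5 * (c1 + (1 + 1j) + L * (1 + 1j))   # Eq. (20)
c1_hat = complex_round(c1_tilde)                  # Eq. (21)
# in the noiseless case c1_hat recovers s_{1,2} = s1 + L * s2 exactly
```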
Step 4: PNC mapping.
Given that the hard decision \(\hat{c}_1\) depends on \(L_r\) and \(L_i\), the network-coded symbol is obtained in the following five cases:
- Case 1: If \(L_r\) is odd and \(L_i\) is even.
In this case, it can be shown that
$$\begin{aligned}&s_{1,2} \,\bmod \,2\nonumber \\&\quad = \left[ {\left( {s_{1,r}^{(1)} + L_r s_{2,r}^{(2)} - L_i s_{2,i}^{(2)} } \right) \,\bmod \,2} \right] \nonumber \\&\qquad + j\left[ {\left( {s_{1,i}^{(1)} + L_r s_{2,i}^{(2)} + L_i s_{2,r}^{(2)} } \right) \,\bmod \,2} \right] \nonumber \\&\quad = \left[ {\left( {s_{1,r}^{(1)} + s_{2,r}^{(2)} } \right) \,\bmod \,2} \right] + j\left[ {\left( {s_{1,i}^{(1)} + s_{2,i}^{(2)} } \right) \,\bmod \,2} \right] \nonumber \\&\quad =s_1^{(1)} \oplus \,s_2^{(2)}, \end{aligned}$$
(22)
where \(s_{t,r}^{(k)}\) and \(s_{t,i}^{(k)}\) denote the real and imaginary parts of \(s_t^{(k)} \,\left( {t = 1,2} \right) \), respectively. Because \(x_1^{(1)} \oplus \,x_2^{(2)} = 2( {s_1^{(1)} \oplus \,s_2^{(2)} } ) - ( {1 + j} )\), we can obtain the estimate of the PNC-encoded symbol using the decision rule in [3] as follows:
$$\begin{aligned} \overline{x_1^{(1)} \oplus \,x_2^{(2)} } = 2\left( {{\hat{c}}_1 \,\,\bmod \,\,2} \right) - \left( {1 + j} \right) . \end{aligned}$$
(23)
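The identity in Eq. (22) can be verified exhaustively over the un-modulated alphabet for one particular odd/even \(L\). The helper names below are ours; mod-2 and \(\oplus\) act component-wise on the real and imaginary parts.

```python
def cmod2(z):
    """Component-wise mod-2 reduction of a complex integer."""
    return complex(int(round(z.real)) % 2, int(round(z.imag)) % 2)

def cxor(a, b):
    """Bit-wise XOR of two un-modulated symbols in {0, 1, j, 1+j}."""
    return complex(int(a.real) ^ int(b.real), int(a.imag) ^ int(b.imag))

# Exhaustive check of Eq. (22) for one odd/even choice, L = 3 + 2j
L = 3 + 2j
alphabet = [0, 1, 1j, 1 + 1j]
ok = all(cmod2(s1 + L * s2) == cxor(complex(s1), complex(s2))
         for s1 in alphabet for s2 in alphabet)
```

The Case 2 identity (24) can be checked in the same way with an even/odd \(L\).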
- Case 2: If \(L_r\) is even and \(L_i\) is odd.
Using [3], we can show that
$$\begin{aligned} s_{1,2} \,\bmod \,2&= \left[ {\left( {s_{1,r}^{(1)} + L_r s_{2,r}^{(2)} - L_i s_{2,i}^{(2)} } \right) \,\bmod \,2} \right] \nonumber \\&\quad + j\left[ {\left( {s_{1,i}^{(1)} + L_r s_{2,i}^{(2)} + L_i s_{2,r}^{(2)} } \right) \,\bmod \,2} \right] \nonumber \\&= \left[ {\left( {s_{1,r}^{(1)} + s_{2,i}^{(2)} } \right) \,\bmod \,2} \right] \nonumber \\&\quad + j\left[ {\left( {s_{1,i}^{(1)} + s_{2,r}^{(2)} } \right) \,\bmod \,2} \right] \nonumber \\&= s_1^{(1)} \oplus \,\left( {js_2^{(2)} } \right) . \end{aligned}$$
(24)
Because \(x_1^{(1)} \oplus jx_2^{(2)} = 2\left( {s_1^{(1)} \oplus js_2^{(2)} }\right) - \left( {1 + j} \right) \), we can estimate the PNC-encoded symbol as follows:
$$\begin{aligned} \overline{x_1^{(1)} \oplus jx_2^{(2)} } = 2\left( {{\hat{c}}_1 \,\bmod \,2} \right) - \left( {1 + j} \right) . \end{aligned}$$
(25)
- Case 3: If \(L_r\) and \(L_i\) are both even or both odd, and \(\left| {L_r } \right| > 1\) or \(\left| {L_i } \right| > 1\).
In this case \(\left| L \right| \ge 2\), so the received signal power of \(x_1^{(1)}\) is less than that of \(x_2^{(2)}\). Therefore, the decision rules for this case are given by
$$\begin{aligned} {\hat{x}}_2^{(2)}&= \mathrm {sign}\left( {\frac{{c_1 }}{L}} \right) \end{aligned}$$
(26)
$$\begin{aligned} {\hat{x}}_1^{(1)}&= \mathrm {sign}\left( {c_0 - \left( {L + l} \right) {\hat{x}}_2^{(2)} } \right) , \end{aligned}$$
(27)
where
$$\begin{aligned} \mathrm {sign}(x) = \left\{ \begin{array}{ll} 1 &{} \mathrm{{if}}\,\,x \ge 0\\ - 1 &{} \mathrm{{if}}\,\,x < 0. \end{array} \right. \end{aligned}$$
Finally, we can obtain the PNC-encoded symbol as \(\overline{x_1^{(1)} \oplus \,x_2^{(2)} } = {\hat{x}}_1^{(1)} \oplus {\hat{x}}_2^{(2)}\).
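A noiseless sketch of the decision rules (26) and (27), with \(\mathrm{sign}(\cdot)\) applied separately to the real and imaginary parts (our reading for QPSK) and the estimation-error terms set to zero; the values of \(L\) and \(l\) are arbitrary examples:

```python
def csign(z):
    """sign(.) applied separately to real and imaginary parts (sign(0) = +1)."""
    s = lambda v: 1.0 if v >= 0 else -1.0
    return complex(s(z.real), s(z.imag))

# Noiseless example with |L| >= 2: detect the stronger symbol x2 first
x1, x2 = 1 - 1j, -1 + 1j
L, l = 2 + 0j, 0.3 - 0.1j               # quantized gain and residual (arbitrary)
c0 = x1 + (L + l) * x2                  # Eq. (16) with interference/noise dropped
c1 = c0 - l * x2                        # residual interference removed, cf. Eq. (19)
x2_hat = csign(c1 / L)                  # Eq. (26)
x1_hat = csign(c0 - (L + l) * x2_hat)   # Eq. (27)
```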
- Case 4: If \(\left| {L_r } \right| = \left| {L_i } \right| = 1\).
In this case, since \(x_1^{(1)} + \left( {L_r + jL_i } \right) x_2^{(2)}\) cannot be mapped into \( {x_1^{(1)} \oplus \,x_2^{(2)} }\) in the QPSK constellation, we need to map it into the 5-QAM constellation \(\big ( - j3/\sqrt{55}, \pm 16/\sqrt{165} - j3/\sqrt{55} , \pm 8/\sqrt{165} + j\sqrt{5/11} \big )\) proposed in [3]. The detailed mapping rule for this special constellation can be found in [3, 16].
- Case 5: If \(L_r = L_i = 0\).
From (11), it is clear that \(\left| {r_{11} }\right| \ge \left| {r_{14} } \right| \), so network coding can be performed using the following steps:
1. Hard decisions on \(x_1^{(1)}\) and \(x_2^{(1)}\) in Eqs. (11) and (12):
$$\begin{aligned} {\hat{x}}_1^{(1)}&= \mathrm {sign}\Big ({\frac{{w_1 - r_{13} {\hat{x}}_1^{(2)} - r_{14} {\hat{x}}_2^{(2)} }}{{r_{11} }}} \Big ) \end{aligned}$$
(28)
$$\begin{aligned} {\hat{x}}_2^{(1)}&= \mathrm {sign}\Big ({\frac{{w_2 - r_{14}^* {\hat{x}}_1^{(2)} + r_{13}^* {\hat{x}}_2^{(2)} }}{{ - r_{11}^* }}} \Big ). \end{aligned}$$
(29)
2. Cancellation of \(x_1^{(1)}\) and \(x_2^{(1)}\) in Eqs. (11) and (12):
$$\begin{aligned} w_1'&= r_{13} x_1^{(2)} + r_{14} x_2^{(2)} + \left( r_{11} x_1^{(1)} - r_{11} {\hat{x}}_1^{(1)} \right) + u_1 \end{aligned}$$
(30)
$$\begin{aligned} w_2'&= r_{14}^* x_1^{(2)} - r_{13}^* x_2^{(2)} - \left( r_{11}^* x_2^{(1)} - r_{11}^* {\hat{x}}_2^{(1)} \right) + u_2. \end{aligned}$$
(31)
3. Hard decisions on \(x_1^{(2)}\) and \(x_2^{(2)}\).
Since the expressions for \(w_1'\) in (30) and \(w_2'\) in (31) are equivalent to those of a point-to-point Alamouti STBC system, we can use maximal-ratio combining (MRC) to combine the signals in Eqs. (30), (31), (13) and (14) to obtain better estimates \({\hat{x}}_2^{(2)}\) and \({\hat{x}}_1^{(2)}\) as follows:
$$\begin{aligned} {\hat{x}}_2^{(2)}&= \mathrm {sign}\left( {\frac{{r_{14}^* w_1' - r_{13} w_2' - r_{33} w_4 }}{{\left| {r_{13} } \right| ^2 + \left| {r_{14} } \right| ^2 + \left| {r_{33} } \right| ^2 }}} \right) , \end{aligned}$$
(32)
$$\begin{aligned} {\hat{x}}_1^{(2)}&= \mathrm {sign}\left( {\frac{{r_{13}^* w_1' + r_{14} w_2' + r_{33}^* w_3 }}{{\left| {r_{13} } \right| ^2 + \left| {r_{14} } \right| ^2 + \left| {r_{33} } \right| ^2 }}} \right) . \end{aligned}$$
(33)
Finally, from (28), (29), (32) and (33), we can obtain the PNC-encoded symbols as \(\overline{x_1^{(1)} \oplus \,x_2^{(2)} } = {\hat{x}}_1^{(1)} \oplus {\hat{x}}_2^{(2)}\) and \(\overline{x_2^{(1)} \oplus \,x_1^{(2)} } = {\hat{x}}_2^{(1)} \oplus {\hat{x}}_1^{(2)}\).
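The Case 5 steps can be traced end-to-end in the noiseless case. For this sanity check we use perfect estimates of \(x^{(2)}\) in place of the soft estimates of Eq. (15); the channel coefficients are arbitrary examples, with \(r_{11}\) and \(r_{33}\) real and positive as produced by the QR step.

```python
def csign(z):
    """Component-wise sign(.) for QPSK hard decisions."""
    s = lambda v: 1.0 if v >= 0 else -1.0
    return complex(s(z.real), s(z.imag))

# Arbitrary example coefficients; r11, r33 are real > 0 after the QR step
r11, r33 = 1.4, 1.1
r13, r14 = 0.5 - 0.2j, 0.3 + 0.4j
x1a, x2a = 1 + 1j, -1 + 1j          # x_1^(1), x_2^(1)
x1b, x2b = 1 - 1j, -1 - 1j          # x_1^(2), x_2^(2)

# Noiseless received statistics, Eqs. (11)-(14)
w1 = r11 * x1a + r13 * x1b + r14 * x2b
w2 = -r11 * x2a + r14.conjugate() * x1b - r13.conjugate() * x2b
w3 = r33 * x1b
w4 = -r33 * x2b

# Step 1: hard decisions, Eqs. (28)-(29), with perfect x^(2) estimates
x1a_hat = csign((w1 - r13 * x1b - r14 * x2b) / r11)
x2a_hat = csign((w2 - r14.conjugate() * x1b + r13.conjugate() * x2b) / -r11)

# Step 2: cancellation, Eqs. (30)-(31)
w1p = w1 - r11 * x1a_hat
w2p = w2 + r11 * x2a_hat

# Step 3: MRC combining, Eqs. (32)-(33)
g = abs(r13) ** 2 + abs(r14) ** 2 + r33 ** 2
x2b_hat = csign((r14.conjugate() * w1p - r13 * w2p - r33 * w4) / g)
x1b_hat = csign((r13.conjugate() * w1p + r14 * w2p + r33 * w3) / g)
```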
Note that although there is a case in which two modulation mappings, i.e. QPSK and 5-QAM, may be used by the system, the two terminal nodes can still switch to the correct mapping for demodulation. To see this, consider the value \(L = {L_r} + j{L_i}\). Since the channel quantization in (18) is used, the receiver knows the five possibilities that may occur for this value, and only the case \(\left| {{L_i}} \right| = \left| {{L_r}} \right| = 1\) requires switching from the QPSK to the 5-QAM constellation. Under the assumption that the channel is symmetric (the forward and backward channels are identical) and that the receiver has knowledge of the CSI from a suitable channel estimator, each terminal node can estimate the value \(L = {L_r} + j{L_i}\) itself. In the case \(\left| {{L_i}} \right| = \left| {{L_r}} \right| = 1\), the two terminal nodes then switch between the two modulation constellations.
3.2 The adaptive STBC-CQ-PNC
From the expression for \(w_1\) in (11), we can see that if \(r_{11} x_1^{(1)} + r_{14} x_2^{(2)}\) or \(r_{11} x_1^{(1)} + r_{13} x_1^{(2)}\) is the desired signal, then \(r_{13} x_1^{(2)} + u_1\) or \(r_{14} x_2^{(2)} + u_1\), respectively, is the interference-plus-noise component. Moreover, since \(r_{13}\) and \(r_{14}\) appear symmetrically in the expressions for \(w_1\) and \(w_2\) in (11) and (12), at least one suitable pair of PNC symbols from \(\mathrm{{\{ }}x_1^{(1)} \oplus x_2^{(2)} ,\,\,x_2^{(1)} \oplus x_1^{(2)} \mathrm{{\} }}\) and \(\mathrm{{\{ }}x_1^{(1)} \oplus x_1^{(2)} ,\,\,x_2^{(1)} \oplus x_2^{(2)} \mathrm{{\} }}\) should be selected to minimize the undesired interference.
Given that the pair \(\big \{x_1^{(1)} \oplus x_2^{(2)} ,\,\,x_2^{(1)} \oplus x_1^{(2)} \big \}\) is selected, it can be seen from (19) that the residual interference is \(l\delta _{x_2^{(2)} } + \frac{{r_{13} }}{{r_{11} }}\delta _{x_1^{(2)} } + \frac{{u_1 }}{{r_{11} }}\). The average power of this interference is approximated by [3]
$$\begin{aligned} P_1&\approx E\left\{ {\left| l \right| ^2 } \right\} \frac{{\sigma _n^2 }}{{\sigma _n^2 + r_{33}^2 }} + \frac{{\left| {r_{13} } \right| ^2 }}{{r_{11}^2 }}\frac{{\sigma _n^2 }}{{\sigma _n^2 + r_{33}^2 }} + \frac{{\sigma _n^2 }}{{r_{11}^2 }}\nonumber \\&\approx \frac{{17}}{{44}}\frac{{\sigma _n^2 }}{{\sigma _n^2 + r_{33}^2 }} + \frac{{\left| {r_{13} } \right| ^2 }}{{r_{11}^2 }}\frac{{\sigma _n^2 }}{{\sigma _n^2 + r_{33}^2 }} + \frac{{\sigma _n^2 }}{{r_{11}^2 }}. \end{aligned}$$
(34)
Similarly, when the pair \(\big \{x_1^{(1)} \oplus x_1^{(2)} ,\,\,x_2^{(1)} \oplus x_2^{(2)} \big \}\) is selected, the residual interference is \(l\delta _{x_1^{(2)} } + \frac{{r_{14} }}{{r_{11} }}\delta _{x_2^{(2)} } + \frac{{u_1 }}{{r_{11} }}\) and its power is approximated by
$$\begin{aligned} P_2&\approx E\left\{ {\left| l \right| ^2 } \right\} \frac{{\sigma _n^2 }}{{\sigma _n^2 + r_{33}^2 }} + \frac{{\left| {r_{14} } \right| ^2 }}{{r_{11}^2 }}\frac{{\sigma _n^2 }}{{\sigma _n^2 + r_{33}^2 }} + \frac{{\sigma _n^2 }}{{r_{11}^2 }}\nonumber \\&\approx \frac{{17}}{{44}}\frac{{\sigma _n^2 }}{{\sigma _n^2 + r_{33}^2 }} + \frac{{\left| {r_{14} } \right| ^2 }}{{r_{11}^2 }}\frac{{\sigma _n^2 }}{{\sigma _n^2 + r_{33}^2 }} + \frac{{\sigma _n^2 }}{{r_{11}^2 }}. \end{aligned}$$
(35)
It can be seen that \(\mathrm {min} \{ P_1 ,\,P_2 \}\) is equivalent to \(\mathrm {min} \{ \left| {r_{13} } \right| ,\left| {r_{14} } \right| \}\). Therefore, we propose an adaptive detection rule that achieves the minimum power \(P_{\mathrm {min} } = \mathrm {min} \left\{ {P_1 ,P_2 } \right\} \) as follows.
Select:
$$\begin{aligned} \left\{ { \begin{array}{*{20}c} {\mathrm{{\{ }}x_1^{(1)} \oplus x_2^{(2)} ,x_2^{(1)} \oplus x_1^{(2)} \mathrm{{\} }}} \\ \\ {\mathrm{{\{ }}x_1^{(1)} \oplus x_1^{(2)} ,x_2^{(1)} \oplus x_2^{(2)} \mathrm{{\} }}} \\ \end{array}} \right. \,\, \begin{array}{*{20}c} {\text {when}\,\,\left| {r_{13} } \right| \le \left| {r_{14} } \right| } \\ \\ {\text {when}\,\,\left| {r_{13} } \right| > \left| {r_{14} } \right| }. \\ \end{array} \end{aligned}$$
(36)
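The selection rule (36) reduces to a single magnitude comparison at the relay; a minimal sketch follows, where the returned labels are illustrative placeholders for the two PNC symbol pairs.

```python
def select_pnc_pairs(r13, r14):
    """Adaptive rule of Eq. (36): choose the PNC pair with the smaller
    residual-interference power, i.e. compare |r13| against |r14|."""
    if abs(r13) <= abs(r14):
        return "pair1"   # {x1(1) xor x2(2), x2(1) xor x1(2)}, power P1
    return "pair2"       # {x1(1) xor x1(2), x2(1) xor x2(2)}, power P2

choice = select_pnc_pairs(0.2 + 0.1j, 0.9 - 0.3j)   # -> "pair1"
```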