Let
\(H=\mathbb{R}^{m}\) and define
\(h_{n}(x)=\frac{1}{100}x\). Take
\(f(x)=\frac{1}{2}\|Ax-b\|^{2}\), so that
\(\nabla f(x)=A^{T}(Ax-b)\) is Lipschitz continuous with constant
\(L=\|A^{T}A\|\), where
\(A^{T}\) denotes the transpose of
\(A\). Take
\(g(x)=\|x\|_{1}\); then
\(V_{\lambda_{n}g}x=\arg\min_{v\in H}\{\lambda_{n} g(v)+\frac{1}{2}\|v-x\|^{2}\}=\arg\min_{v\in H}\{\lambda_{n}\|v\|_{1}+\frac{1}{2}\|v-x\|^{2}\}\). From [20] we also know that
\(\operatorname{prox}_{\lambda_{n}\|\cdot\|_{1}}x=[\operatorname{prox}_{\lambda_{n}|\cdot |}x_{1}, \operatorname{prox}_{\lambda_{n}|\cdot|}x_{2},\ldots, \operatorname{prox}_{\lambda_{n}|\cdot |}x_{m}]^{T}\), where
\(\operatorname{prox}_{\lambda_{n}|\cdot|}x_{i}=\max\{|x_{i}|-\lambda_{n},0\} \operatorname{sign}(x_{i})\) (
\(i=1,2,\ldots,m\)). Set
\(\alpha_{n}=\frac{1}{1000n}\) for every
\(n\geq1\), fix
\(\lambda_{n}=\frac{1}{150\sqrt{L}}\), and generate a random matrix
$$ A= \begin{pmatrix} -4.3814 & -4.0867 & 9.8355 & -2.9770 & 11.4368 \\ -5.3162 & 9.7257 & -5.2225 & 1.7658 & 9.7074 \\ -4.1397 & -4.3827 & 20.0339 & 9.5099 & -4.3200 \\ 6.4894 & -3.6008 & 7.0589 & 14.1585 & -16.0452 \\ 10.2885 & 14.5797 & 0.4747 & 17.4626 & 1.5539 \\ -12.3712 & -21.9349 & -3.3341 & 7.1354 & 3.1741 \\ 4.1361 & -5.7709 & 1.4400 & -16.3867 & -7.6009 \\ -8.1879 & 5.1973 & -0.1416 & -11.5553 & -0.0952 \\ -6.8981 & -6.6670 & 8.6415 & 1.1342 & 3.9836 \\ 8.8397 & 1.8026 & 5.5085 & 6.8296 & 11.7061 \\ 4.7586 & 14.1223 & 0.2261 & -0.4787 & 17.0133 \\ -5.0971 & -0.0285 & 9.1987 & 1.4981 & 14.0493 \\ 10.3412 & 2.9157 & -7.7770 & 5.6670 & -13.8262 \\ 2.4447 & 8.0844 & 2.1304 & 8.7968 & 20.3888 \\ 9.2393 & 2.6692 & 6.4166 & 4.2549 & -13.1472 \\ -4.1641 & 12.2469 & -0.4358 & 5.8242 & -10.0650 \\ 0.6452 & 6.0029 & -13.6151 & 3.4759 & -1.8184 \end{pmatrix}^{T} $$
(4.1)
and a random vector
$$ b= \begin{pmatrix} 189.0722 & 42.6524 & 635.1979 & 281.8669 & 538.5967 \end{pmatrix}^{T}. $$
(4.2)
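As a quick check of the ingredients defined above, both the Lipschitz constant \(L=\|A^{T}A\|\) (the spectral norm) and the componentwise soft-thresholding operator \(\operatorname{prox}_{\lambda_{n}|\cdot|}\) can be evaluated numerically. The sketch below uses NumPy with a small stand-in matrix, not the \(A\) of (4.1):

```python
import numpy as np

def soft_threshold(x, lam):
    """Componentwise prox of lam*|.|: max(|x_i| - lam, 0) * sign(x_i)."""
    return np.maximum(np.abs(x) - lam, 0.0) * np.sign(x)

# Small stand-in matrix (not the A of (4.1)).
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 3.0, 0.0]])

# Lipschitz constant of grad f: the spectral norm of A^T A,
# i.e. its largest eigenvalue (here 9, from the 3 in column 2).
L = np.linalg.norm(A.T @ A, 2)

print(L)
print(soft_threshold(np.array([3.0, -0.5, -2.0]), 1.0))
```

Components with \(|x_{i}|\leq\lambda_{n}\) are set exactly to zero, which is the mechanism that produces the sparse limit vectors reported below.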
Then, by Theorem
3.1, the sequence
\(\{x_{n}\}\) is generated by
$$ x_{n+1}=\frac{1}{1000n}\cdot\frac{1}{100}x_{n}+ \biggl(1-\frac{1}{1000n}\biggr)\operatorname{prox}_{\lambda_{n}\|\cdot\|_{1}} \bigl(x_{n}-\lambda_{n}A^{T}(Ax_{n}-b)\bigr). $$
(4.3)
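A minimal NumPy sketch of this scheme is given below. It is a sketch under assumptions, not the authors' MATLAB code: the step size \(\lambda_{n}\) inside the gradient step and the stopping rule \(\|x_{n+1}-x_{n}\|<\varepsilon\) are modeled on the standard proximal gradient method, and the data are a small random stand-in with the same shapes as (4.1)-(4.2):

```python
import numpy as np

def soft_threshold(x, lam):
    """Componentwise prox of lam*|.|."""
    return np.maximum(np.abs(x) - lam, 0.0) * np.sign(x)

def halpern_prox_grad(A, b, x0, eps=1e-4, max_iter=200000):
    """Iteration modeled on (4.3): x_{n+1} = a_n*x_n/100
    + (1-a_n)*prox(x_n - lam*A^T(A x_n - b)).
    Stops once successive iterates differ by less than eps."""
    L = np.linalg.norm(A.T @ A, 2)          # Lipschitz constant ||A^T A||
    lam = 1.0 / (150.0 * np.sqrt(L))        # lambda_n of the text
    x = np.asarray(x0, dtype=float)
    for n in range(1, max_iter + 1):
        alpha = 1.0 / (1000.0 * n)          # alpha_n of the text
        grad_step = x - lam * (A.T @ (A @ x - b))
        x_new = alpha * (x / 100.0) + (1.0 - alpha) * soft_threshold(grad_step, lam)
        if np.linalg.norm(x_new - x) < eps:
            return x_new, n
        x = x_new
    return x, max_iter

# Random stand-in data with the shapes of (4.1)-(4.2): A is 5x17, b is 5x1.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 17))
b = rng.standard_normal(5)
x_star, n_used = halpern_prox_grad(A, b, np.zeros(17))
print(n_used, np.linalg.norm(A @ x_star - b))
```

Because \(\lambda_{n}\le 1/L\) here, each proximal gradient step drives the residual \(\|Ax_{n}-b\|\) down, while the vanishing Halpern weight \(\alpha_{n}\) contributes less and less of the contraction \(h_{n}\).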
As
\(n\rightarrow\infty\), we have
\(x_{n}\rightarrow x^{*}\). Taking two distinct initial guesses
\(x_{0}\) and implementing the iteration in MATLAB, we obtain the numerical results reported in Tables
1 and
2, where
$$\begin{aligned} x_{56}&=(7.1006,\ {-2.9422},\ 8.8367,\ 2.8348,\ 4.5842,\ {-3.9470},\ 0.0197,\ {-4.4153},\ 3.6363,\\ &\qquad 10.2320,\ 5.8671,\ 7.0921,\ {-3.9847},\ 7.8388,\ 3.5086,\ {-5.5774},\ {-8.0655})^{T},\\ x_{157}&=(7.1148,\ {-3.0244},\ 8.7745,\ 2.8338,\ 4.5735,\ {-3.9616},\ 0.1036,\ {-4.5064},\ 3.5960,\\ &\qquad 10.3482,\ 5.8919,\ 7.0715,\ {-3.9318},\ 7.8632,\ 3.5382,\ {-5.7408},\ {-8.1055})^{T},\\ x_{223\text{,}462}&=(4.2108,\ 0,\ 11.8683,\ 0,\ 0,\ {-2.0724},\ 0,\ 0,\ 0,\\ &\qquad 24.9734,\ 2.5163,\ 2.4147,\ 0,\ 7.6589,\ 0,\ 0,\ {-12.6599})^{T}, \end{aligned}$$
(4.4)
$$\begin{aligned} x_{57}&=(7.8596,\ {-2.2018},\ 8.8707,\ 3.3315,\ 4.7086,\ {-2.7415},\ 1.7903,\ {-3.2518},\ 4.3445,\\ &\qquad 10.8322,\ 6.5156,\ 7.5856,\ {-2.7904},\ 8.1903,\ 4.2573,\ {-5.1121},\ {-6.8677})^{T},\\ x_{154}&=(7.8725,\ {-2.2857},\ 8.8060,\ 3.3304,\ 4.6981,\ {-2.7551},\ 1.8729,\ {-3.3467},\ 4.3031,\\ &\qquad 10.9491,\ 6.5397,\ 7.5637,\ {-2.7378},\ 8.2151,\ 4.2857,\ {-5.2790},\ {-6.9080})^{T},\\ x_{218\text{,}128}&=(4.7788,\ 0,\ 12.0778,\ 0,\ 0,\ {-2.0434},\ 0,\ 0,\ 0,\\ &\qquad 25.3370,\ 2.7797,\ 2.3643,\ 0,\ 7.0517,\ 0,\ 0,\ {-11.9260})^{T}, \end{aligned}$$
(4.5)
where
\(x_{n}\) is the iterate generated by the scheme of Theorem
3.1. The limit of
\(\{x_{n}\}\) is therefore a solution of problem (
1.1). In general it is difficult to obtain an exact solution of the linear system
\(Ax=b\), which is why many algorithms aim only at approximate solutions. The computations above show that
\(x_{n}\) comes very close to satisfying
\(Ax=b\), so, to some extent, Theorem
3.1 solves both (
1.1) and
\(Ax=b\). Moreover, many practical problems in applied sciences, such as signal processing and image reconstruction, are formulated as linear systems
\(Ax=b\), so our theorem is useful for solving such problems.
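The claim that an iterate nearly satisfies \(Ax=b\) can be checked directly by evaluating the residual norm \(\|Ax_{n}-b\|\). A minimal sketch, with stand-in data rather than (4.1)-(4.2), where \(b\) is constructed in the range of \(A\) so an exact solution exists:

```python
import numpy as np

def residual_norm(A, b, x):
    """Euclidean norm of the residual Ax - b."""
    return np.linalg.norm(A @ x - b)

# Stand-in data (not (4.1)-(4.2)): b = A @ x_exact by construction.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])
x_exact = np.array([1.0, 1.0, 1.0])
b = A @ x_exact

print(residual_norm(A, b, x_exact))       # 0.0 for the exact solution
print(residual_norm(A, b, np.zeros(3)))   # ||b|| for the zero guess
```

A small residual norm at \(x_{n}\) is exactly what "very close to satisfying \(Ax=b\)" means quantitatively.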
Table 1
\(x_{0}=(0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0)^{T}\)

\(10^{-2}\) | 56 | \(x_{56}\) | \(6.8466\times10^{-4}\) | 5.8146
\(10^{-4}\) | 157 | \(x_{157}\) | \(8.5299\times10^{-6}\) | 0.1769
\(10^{-5}\) | 223,462 | \(x_{223\text{,}462}\) | \(4.2581\times10^{-7}\) | 0.0931
Table 2
\(x_{0}=(1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1)^{T}\)

\(10^{-2}\) | 57 | \(x_{57}\) | \(6.8700\times10^{-4}\) | 5.9909
\(10^{-4}\) | 154 | \(x_{154}\) | \(8.7043\times10^{-6}\) | 0.1797
\(10^{-5}\) | 218,128 | \(x_{218\text{,}128}\) | \(4.2567\times10^{-7}\) | 0.0931