In this paper, we study the split DC program via a split proximal linearized algorithm. A linear convergence theorem for the proposed algorithm is established under suitable conditions. As an application, we study the DC program (DCP). Finally, we present numerical results illustrating the proposed convergence theorems.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
1 Introduction
First, we recall the minimization problem for convex functions:
where H is a real Hilbert space and \(f: H\rightarrow(-\infty, \infty]\) is a proper, lower semicontinuous, and convex function. This is a classical convex minimization problem with many applications. To study this problem, Martinet [11] introduced the proximal point algorithm
and showed that \(\{x_{n}\}_{n\in\mathbb{N}}\) converges weakly to a minimizer of f under suitable conditions. However, this algorithm is useful only for convex problems, since it relies on the monotonicity of the subdifferential operators of convex functions. It is therefore natural to study how proximal point methods can be extended to nonconvex problems.
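For intuition, the proximal point iteration is easy to sketch whenever the proximal map is available in closed form. The following toy example (our illustration, not from the paper) takes \(f(x)=\lambda|x|\), whose proximal map is the soft-thresholding operator:

```python
def prox_abs(x, t):
    """Proximal map of t*|.| at x: the soft-thresholding operator."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def proximal_point(x0, beta, lam, n_iter):
    """Martinet's proximal point algorithm x_{n+1} = prox_{beta*f}(x_n)
    for f(x) = lam*|x|; each step is soft-thresholding with threshold beta*lam."""
    x = x0
    for _ in range(n_iter):
        x = prox_abs(x, beta * lam)
    return x

# starting from x0 = 5, the iterates decrease by beta*lam = 0.5 per step
# and reach the minimizer 0 of f after ten steps
x_star = proximal_point(5.0, 1.0, 0.5, 12)
```

Each iterate moves a fixed distance toward the minimizer of \(f\), which is the expected behavior of the exact proximal point method on this piecewise linear function.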
The following is a well-known nonconvex problem, known as DC program:
where \(g,h:\mathbb{R}^{n}\rightarrow \mathbb{R}\) are proper, lower semicontinuous, and convex functions. Here, the function f is called a DC function, and the functions g and h are called DC components of f. In the DC program, the convention \((+\infty)-(+\infty)=+\infty\) is adopted to avoid the ambiguous expression \((+\infty)-(+\infty)\), which is of no interest. It is well known that a necessary condition for \(x\in\operatorname{dom}(f):=\{x\in\mathbb{R}^{n}: f(x)<\infty\}\) to be a local minimizer of f is \(\partial h(x)\subseteq \partial g(x)\), where \(\partial g(x)\) and \(\partial h(x)\) are the subdifferentials of g and h, respectively (see Definition 2.4). However, this condition is hard to verify. Therefore, many researchers focus their attention on finding points x such that \(\partial h(x)\cap\partial g(x) \neq\emptyset\); such a point x is called a critical point of f [8].
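As a toy illustration (not taken from the paper) of how critical points can fail to be minimizers, take \(f(x)=x^{4}-x^{2}\) with the DC components \(g(x)=x^{4}\) and \(h(x)=x^{2}\); both components are smooth, so criticality reduces to \(\nabla g(x)=\nabla h(x)\). A minimal sketch:

```python
def g(x): return x ** 4          # convex DC component
def h(x): return x ** 2          # convex DC component
def f(x): return g(x) - h(x)     # DC function f = g - h

def is_critical(x, tol=1e-9):
    """Both components are smooth here, so the critical-point condition
    grad h(x) in subdiff g(x) reduces to 4*x**3 == 2*x."""
    return abs(4 * x ** 3 - 2 * x) < tol

crit = [0.0, 0.5 ** 0.5, -(0.5 ** 0.5)]  # the three critical points of f
values = [f(x) for x in crit]            # f(0) = 0, f(+-1/sqrt(2)) = -0.25
```

All three points are critical, but \(x=0\) is a local maximizer, which shows that criticality is necessary but not sufficient for local optimality.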
It is worth mentioning the richness of the class of DC functions: it is a vector space containing the class of lower-\(\mathcal{C}^{2}\) functions. In particular, \(\mathcal{DC}(\mathbb{R}^{n})\) contains the space \(\mathcal{C}^{1,1}\) of functions whose gradient is locally Lipschitz continuous. Further, \(\mathcal{DC}(\mathbb{R}^{n})\) is closed under the operations usually considered in optimization: for example, a linear combination, a finite supremum, or the product of two DC functions remains DC. It is also known that the set of DC functions defined on a compact convex subset of \(\mathbb{R}^{n}\) is dense in the set of continuous functions on this set.
Interest in the theory of DC functions has increased considerably in recent years. Optimality conditions and duality theorems related to the DC program have been given (see, for example, [6, 7, 14]), and algorithms for the DC program have been proposed to analyze and solve a variety of highly structured practical problems (see, for example, [13]).
In 2003, Sun, Sampaio, and Candido [16] gave the following algorithm to study problem (DCP).
Let \(\{\beta_{n}\} _{n\in\mathbb{N}}\) be a sequence in \((0,\infty)\), and let \(g,h:\mathbb{R}^{k}\rightarrow \mathbb{R}\) be proper lower semicontinuous and convex functions. Let \(\{x_{n}\}_{n\in\mathbb{N}}\) be generated by
Let \(g,h:\mathbb{R}^{k}\rightarrow \mathbb{R}\cup\{+\infty\}\) be proper, lower semicontinuous, and convex functions, and let \(g-h\) be bounded from below. Suppose that g is ρ-strongly convex, h is differentiable, and \(\nabla h(x)\) is L-Lipschitz continuous. Let \(\{\beta_{n}\}_{n\in \mathbb{N}}\) be a bounded sequence with \(\liminf_{n\rightarrow \infty}\beta _{n}>0\). Let \(\{x_{n}\}_{n\in\mathbb{N}}\) be generated by Algorithm 1.2. If \(\rho>2L\), then \(\{x_{n}\}_{n\in\mathbb {N}}\) converges linearly to a critical point x̄ of problem (DCP).
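To make the proximal linearized step concrete, suppose (as a hedged reading of Algorithm 1.2) that each iteration solves \(x_{n+1}=\operatorname{arg\,min}_{x}\, g(x)-\langle\nabla h(x_{n}),x\rangle+\frac{1}{2\beta_{n}}\Vert x-x_{n}\Vert^{2}\). For the illustrative quadratic g and linear h below (our own data, with \(\rho=4\) and \(L=0\), so \(\rho>2L\) holds), this argmin has a closed form:

```python
c = [4.0, 8.0, 12.0]                      # gradient of the linear part h

def prox_linearized_step(x, beta):
    """Closed-form solution of
       argmin_u 2*||u||^2 - <c, u> + (1/(2*beta))*||u - x||^2,
    obtained from the optimality condition 4u + (u - x)/beta = c."""
    return [(beta * ci + xi) / (4.0 * beta + 1.0) for ci, xi in zip(c, x)]

x = [0.0, 0.0, 0.0]
for _ in range(30):
    x = prox_linearized_step(x, beta=1.0)
# the unique critical point solves 4*x = c, i.e. x = (1, 2, 3)
```

With \(\beta_{n}=1\) the error contracts by the factor \(1/(4\beta+1)=1/5\) per step, consistent with the linear convergence asserted by the theorem.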
In this paper, we want to study the split DC program:
$$ \text{Find } \bar{x}\in H_{1} \text{ such that }\bar{x}\in \operatorname{arg}\min _{x\in H_{1}}f_{1}(x) \text{ and } A\bar{x}\in \operatorname{arg}\min _{y\in H _{2}}f_{2}(y), $$
(SDCP)
where \(H_{1}\) and \(H_{2}\) are real Hilbert spaces, \(A:H_{1}\rightarrow H_{2}\) is a nonzero bounded linear mapping with adjoint operator \(A^{*}\), \(g_{1},h_{1}:H_{1}\rightarrow \mathbb{R}\) and \(g_{2},h_{2}:H_{2}\rightarrow \mathbb{R}\) are proper, lower semicontinuous, and convex functions, \(f_{1}(x)=g_{1}(x)-h_{1}(x)\) for all \(x\in H_{1}\), and \(f_{2}(y)=g_{2}(y)-h_{2}(y)\) for all \(y\in H_{2}\).
Clearly, (SDCP) is a generalization of problem (DCP). Indeed, if \(H_{1}=H_{2}=\mathbb{R}^{n}\), \(A:\mathbb{R}^{n}\rightarrow \mathbb{R}^{n}\) is the identity mapping, \(g_{1}=g_{2}\), and \(h_{1}=h_{2}\), then problem (SDCP) is reduced to problem (DCP).
If \(h_{1}(x)=0\) and \(h_{2}(y)=0\) for all \(x\in H_{1}\) and \(y\in H_{2}\), then (SDCP) is reduced to the split minimization problems (SMP) for convex functions:
$$\begin{aligned}& \mbox{Find }\bar{x}\in H_{1}\mbox{ such that }g_{1}( \bar{x})= \min_{u\in H_{1}}g_{1}(u)\mbox{ and }g_{2}(A\bar{x})=\min_{v\in H_{2}}g _{2}(v), \end{aligned}$$
(SMP)
where \(H_{1}\) and \(H_{2}\) are real Hilbert spaces, \(A:H_{1}\rightarrow H_{2}\) is a linear and bounded mapping with adjoint \(A^{*}\), \(g_{1}:H_{1}\rightarrow \mathbb{R}\) and \(g_{2}:H_{2}\rightarrow \mathbb{R}\) are proper, lower semicontinuous, and convex functions. For example, one can see [4] and the related references.
If \(H_{1}=H_{2}=H\) and \(A:H\rightarrow H\) is the identity mapping, then problem (SMP) is reduced to the common minimization problem for convex functions:
$$\begin{aligned}& \mbox{Find }\bar{x}\in H\mbox{ such that }g_{1}(\bar{x})= \min _{u\in H}g_{1}(u)\mbox{ and }g_{2}(\bar{x})= \min_{v\in H}g_{2}(v), \end{aligned}$$
(CMP)
where H is a real Hilbert space, \(g_{1},g_{2}:H\rightarrow \mathbb {R}\) are proper, lower semicontinuous, and convex functions. Further, if the solution set of problem (CMP) is nonempty, then problem (CMP) is equivalent to the following problem:
$$\begin{aligned}& \mbox{Find }\bar{x}\in H\mbox{ such that }g_{1}(\bar{x})+g _{2}(\bar{x})=\min_{u\in H}g_{1}(u)+g_{2}(u), \end{aligned}$$
(MP2)
where H is a real Hilbert space, \(g_{1},g_{2}:H\rightarrow \mathbb {R}\) are proper, lower semicontinuous, and convex functions. This problem is well known and it has many important applications, including multiresolution sparse regularization, Fourier regularization, hard-constrained inconsistent feasibility, and alternating projection signal synthesis problems. For example, one can refer to [5, 9] and the related references.
On the other hand, Moudafi [12] introduced the split variational inclusion problem, which is a generalization of problem (SMP):
$$\begin{aligned}& \mbox{Find }\bar{x}\in H_{1}\mbox{ such that }0_{H_{1}}\in B_{1}( \bar{x})\mbox{ and }0_{H_{2}}\in B_{2}(A \bar{x}), \end{aligned}$$
(SVIP)
where \(H_{1}\) and \(H_{2}\) are real Hilbert spaces, \(B_{1}:H_{1}\multimap H _{1}\) and \(B_{2}:H_{2}\multimap H_{2}\) are set-valued maximal monotone mappings, \(A:H_{1}\rightarrow H_{2}\) is a linear and bounded operator, and \(A^{*}\) is the adjoint of A. Here, \(0_{H_{1}}\) and \(0_{H_{2}}\) are zero elements of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively. To study problem (SVIP), Byrne et al. [3] gave the following algorithm and related convergence theorem in infinite dimensional Hilbert space.
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces, \(A:H_{1}\rightarrow H_{2}\) be a nonzero linear and bounded operator, and let \(A^{*}\) denote the adjoint operator of A. Let \(B_{1}:H_{1}\multimap H_{1}\) and \(B_{2}:H_{2}\multimap H_{2}\) be set-valued maximal monotone mappings, \(\beta>0\), and \(\gamma\in(0,\frac {2}{\Vert A\Vert ^{2}})\). Let Ω be the solution set of (SVIP), and suppose that \(\varOmega\neq\emptyset\). Let \(\{x_{n}\} _{n\in\mathbb{N}}\) be defined by
Then \(\{x_{n}\}\) converges weakly to an element \(\bar{x}\in\varOmega\).
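As an illustration, suppose (our assumption, following the standard form of this scheme) that the iteration is \(x_{n+1}=J_{\beta}^{B_{1}}(x_{n}+\gamma A^{*}(J_{\beta}^{B_{2}}-I)Ax_{n})\). Taking \(B_{1}\) and \(B_{2}\) to be normal-cone operators of intervals, whose resolvents are metric projections, yields a small split feasibility example:

```python
def proj_interval(x, lo, hi):
    """Resolvent of the normal cone of [lo, hi]: the metric projection."""
    return min(max(x, lo), hi)

def byrne_step(x, gamma, a):
    """One iteration x_{n+1} = J^{B1}(x + gamma * a * (J^{B2} - I)(a*x))
    with B1, B2 the normal cones of C1 = [0, 1] and C2 = [1, 2], and
    A the scalar map x -> a*x (so A* = a)."""
    ax = a * x
    residual = proj_interval(ax, 1.0, 2.0) - ax
    return proj_interval(x + gamma * a * residual, 0.0, 1.0)

a = 2.0                      # ||A||^2 = 4, so gamma must lie in (0, 0.5)
x = 0.0
for _ in range(50):
    x = byrne_step(x, gamma=0.25, a=a)
# any limit satisfies x in C1 = [0, 1] and a*x in C2 = [1, 2]
```

Here the solution set of (SVIP) is \([0.5,1]\), and the iteration starting at 0 lands on its boundary point 0.5 and stays there.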
If \(B_{1}=\partial g_{1}\) and \(B_{2}=\partial g_{2}\) (the subdifferential of \(g_{i}\), \(i=1,2\)), then the algorithm given by Theorem 1.2 is reduced to the following algorithm:
In this paper, motivated by the above works on DC programs and related problems, we want to study problem (SDCP) by using the split proximal linearized algorithm:
where \(H_{1}\) and \(H_{2}\) are real Hilbert spaces, \(A:H_{1}\rightarrow H_{2}\) is a bounded linear mapping with adjoint \(A^{*}\), \(g_{1},h_{1}:H_{1}\rightarrow \mathbb{R}\) and \(g_{2},h_{2}:H_{2}\rightarrow \mathbb{R}\) are proper, lower semicontinuous, and convex functions, \(g_{1}\) and \(g_{2}\) are strongly convex, \(h_{1}\) and \(h_{2}\) are Fréchet differentiable with L-Lipschitz continuous gradients \(\nabla h_{1}\) and \(\nabla h_{2}\), \(f_{1}(x)=g_{1}(x)-h_{1}(x)\) for all \(x\in H_{1}\), and \(f_{2}(y)=g _{2}(y)-h_{2}(y)\) for all \(y\in H_{2}\). Further, linear convergence theorems for the proposed algorithms are established under suitable conditions. Finally, we present numerical results illustrating the proposed convergence theorems.
2 Preliminaries
Let H be a (real) Hilbert space with the inner product \(\langle \cdot,\cdot\rangle\) and the norm \(\Vert \cdot\Vert \). We denote the strong and weak convergence of \(\{x_{n}\}_{n\in\mathbb{N}}\) to \(x\in H\) by \(x_{n}\rightarrow x\) and \(x_{n}\rightharpoonup x\), respectively. For each \(x,y,u,v\in H\) and \(\lambda\in\mathbb{R}\), we have
$$\begin{aligned}& \Vert x+y \Vert ^{2}= \Vert x \Vert ^{2}+2\langle x,y\rangle+ \Vert y \Vert ^{2}, \end{aligned}$$
f is Gâteaux differentiable at \(x\in H\) if there is \(\nabla f(x)\in H\) such that
$$ \lim_{t\rightarrow0}\frac{f(x+ty)-f(x)}{t}=\bigl\langle y, \nabla f(x) \bigr\rangle $$
for each \(y\in H\).
(vi)
f is Fréchet differentiable at x if there is \(\nabla f(x)\) such that
$$ \lim_{y\rightarrow0} \frac{f(x+y)-f(x)-\langle\nabla f(x),y\rangle}{ \Vert y \Vert }=0. $$
Remark 2.1
Let H be a real Hilbert space. Then \(f(x):=\Vert x\Vert ^{2}\) is a 2-strongly convex function. Moreover, \(g:H\rightarrow \mathbb {R}\) is ρ-strongly convex if and only if \(g-\frac{\rho}{2}\Vert \cdot \Vert ^{2}\) is convex [1, Proposition 10.6].
Example 2.1
Let \(g(x):=\frac{1}{2}\langle Qx,x\rangle-\langle x,b\rangle\), where \(Q\in\mathbb{R}^{n\times n}\) is a real symmetric positive definite matrix and \(b\in\mathbb{R}^{n}\). Then g is a strongly convex function.
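A quick numerical way to find the modulus of strong convexity in Example 2.1 is through the smallest eigenvalue of Q; the following sketch (with an illustrative matrix Q of our choosing) also checks the characterization in Remark 2.1:

```python
import numpy as np

Q = np.array([[4.0, 1.0],
              [1.0, 3.0]])   # symmetric positive definite

# g(x) = 0.5*<Qx, x> - <x, b> has Hessian Q, so g is rho-strongly convex
# exactly when Q - rho*I is positive semidefinite; the best modulus is
# rho = lambda_min(Q) (cf. Remark 2.1: g - (rho/2)*||.||^2 stays convex).
rho = np.linalg.eigvalsh(Q)[0]

# smallest eigenvalue of Q - rho*I; nonnegative up to rounding error,
# confirming that g - (rho/2)*||.||^2 is still convex
residual_min = np.linalg.eigvalsh(Q - rho * np.eye(2))[0]
```

For this Q the eigenvalues are \((7\pm\sqrt{5})/2\), so g is ρ-strongly convex with \(\rho=(7-\sqrt{5})/2\approx 2.382\).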
Definition 2.4
Let \(f:H\rightarrow(-\infty,\infty]\) be a proper lower semicontinuous and convex function. Then the subdifferential ∂f of f is defined by
$$ \partial f(x):=\bigl\{ x^{*}\in H: f(x)+\bigl\langle y-x,x^{*} \bigr\rangle \leq f(y)\text{ for each }y\in H\bigr\} $$
for each \(x\in H\).
Lemma 2.1
Let\(f:H\rightarrow(-\infty,\infty]\)be a proper lower semicontinuous and convex function. Then the following are satisfied:
(i)
∂fis a set-valued maximal monotone mapping.
(ii)
fis Gâteaux differentiable at\(x\in \operatorname{int}( \mathcal{D})\)if and only if\(\partial f(x)\)consists of a single element. That is, \(\partial f(x)=\{\nabla f(x)\}\) [2, Proposition 1.1.10].
(iii)
Suppose thatfis Fréchet differentiable. Thenfis convex if and only if ∇fis a monotone mapping.
Let \(\rho>0\), let H be a real Hilbert space, and let \(f:H\rightarrow \mathbb {R}\) be a proper, lower semicontinuous, and convex function. If f is ρ-strongly convex, then ∂f is ρ-strongly monotone.
Let H be a real Hilbert space and \(f:H\rightarrow (-\infty,\infty]\) be a proper, lower semicontinuous, and convex function. If \(\{u_{n}\}_{n\in \mathbb{N}}\) and \(\{x_{n}\}_{n\in\mathbb{N}}\) are sequences in H with \(u_{n}\in\partial f(x_{n})\) for all \(n\in\mathbb{N}\), \(x_{n}\rightharpoonup x\), and \(u_{n}\rightarrow u\), then \(u\in\partial f(x)\).
Let H be a real Hilbert space, \(B:H\multimap H\) be a set-valued maximal monotone mapping, \(\beta >0\), and let \(J_{\beta}^{B}\) be defined by \(J_{\beta}^{B}(x):=(I+\beta B)^{-1}(x)\) for each \(x\in H\). Then \(J_{\beta}^{B}\) is a single-valued mapping.
3 Split proximal linearized algorithm
Throughout this section, we use the following notations and assumptions. Let \(\rho\geq L>0\). Let \(H_{1}\) and \(H_{2}\) be finite dimensional real Hilbert spaces, \(A:H_{1}\rightarrow H_{2}\) be a nonzero linear and bounded mapping, \(A^{*}\) be the adjoint of A, \(g_{1},h_{1}:H_{1}\rightarrow \mathbb{R}\) be proper lower semicontinuous and convex functions, \(g_{2},h_{2}:H_{2}\rightarrow \mathbb{R}\) be proper lower semicontinuous and convex functions, \(f_{1}(x)=g_{1}(x)-h_{1}(x)\) for all \(x\in H_{1}\), and \(f_{2}(y)=g_{2}(y)-h_{2}(y)\) for all \(y\in H_{2}\). Further, we assume that \(f_{1}\) and \(f_{2}\) are bounded from below, \(h_{1}\) and \(h_{2}\) are Fréchet differentiable, \(\nabla h_{1}\) and \(\nabla h _{2}\) are L-Lipschitz continuous, \(g_{1}\) and \(g_{2}\) are ρ-strongly convex. Let \(\beta\in(0,\infty)\), and let \(\{\beta_{n}\}_{n\in\mathbb{N}}\) be a sequence in \([a,b]\subseteq(0, \infty)\). Let \(r\in(0,\frac{1}{\Vert A\Vert ^{2}})\) and \(\{r_{n}\}_{n\in \mathbb{N}}\) be a sequence in \((0,\frac{1}{\Vert A\Vert ^{2}})\). Let \(\varOmega_{\text{SDCP}}\) be defined by
Since \(w\in\varOmega_{\text{SDCP}}\), we know that \(\nabla h_{2}(Aw) \in\partial g_{2}(Aw)\). By Lemma 2.2, \(\partial g_{2}\) is ρ-strongly monotone, and then
By (3.9), \(\lim_{n\rightarrow \infty}\Vert x_{n}-w\Vert \) exists and \(\{x_{n}\}_{n\in\mathbb{N}}\) is a bounded sequence. Further, \(\{Ax_{n}\}_{n\in\mathbb{N}}\), \(\{y_{n}\}_{n\in\mathbb{N}}\), \(\{z_{n}\}_{n\in\mathbb{N}}\) are bounded sequences. By (3.9) again, we know that
It follows from \(\{\beta_{n}\}_{n\in\mathbb{N}}\subseteq (a,b)\), \(0<\liminf_{n\rightarrow \infty}r_{n}\leq\limsup_{n\rightarrow \infty}r_{n}< \frac{1}{\Vert A\Vert ^{2}}\), and (3.11) that
Since \(\{x_{n}\}_{n\in\mathbb{N}}\) is bounded, there exists a subsequence \(\{x_{n_{k}}\}_{k\in\mathbb{N}}\) of \(\{x_{n}\}_{n\in \mathbb{N}}\) such that \(x_{n_{k}}\rightarrow \bar{x}\in H_{1}\). Clearly, \(Ax_{n_{k}}\rightarrow A\bar{x}\), \(z_{n_{k}}\rightarrow \bar{x}\), \(Az_{n_{k}}\rightarrow A \bar{x}\), \(y_{n_{k}}\rightarrow A\bar{x}\), and \(x_{n_{k}+1}\rightarrow \bar{x}\). Further, by (3.2), we obtain
By (3.14) and (3.15), we know that \(\bar{x}\in\varOmega_{\text{SDCP}}\). Further, \(\lim_{n\rightarrow \infty}\Vert x_{n}-\bar{x}\Vert =\lim_{k\rightarrow \infty }\Vert x_{n_{k}}-\bar{x}\Vert =0\). Therefore, the proof is completed. □
Remark 3.1
(i)
In Algorithm 3.1, if \(y_{n}=Ax _{n}\) and \(x_{n+1}=z_{n}\), then \(x_{n}=z_{n}\), and this implies that \(\nabla h_{1}(x_{n})\in\partial g_{1}(x_{n})\) and \(\nabla h_{2}(Ax _{n})\in\partial g_{2}(Ax_{n})\). Thus, \(x_{n}\in\varOmega_{\text{SDCP}}\).
(ii)
In Algorithm 3.1, if \(x_{n+1} \neq z_{n}\), then \(f_{1}(x_{n+1})< f_{1}(z_{n})\). Indeed, it follows from \(\partial h_{1}(z_{n})=\{\nabla h_{1}(z_{n})\}\) and the definition of \(x_{n+1}\) that
So, if \(x_{n+1}\neq z_{n}\), then \(f_{1}(x_{n+1})< f_{1}(z_{n})\).
(iii)
In Algorithm 3.1, if \(y_{n} \neq Ax_{n}\), then \(f_{2}(y_{n})< f_{2}(Ax_{n})\). Indeed, it follows from \(\partial h_{2}(Ax_{n})=\{\nabla h_{2}(Ax_{n})\}\) and the definition of \(y_{n}\) that
where \(I_{H_{2}}\) is the identity mapping on \(H_{2}\). Since \(g_{2}\) is proper, lower semicontinuous, and convex, we know that \(\partial g_{2}\) is maximal monotone. So, by Lemma 2.5, we determine that
In fact, the idea of Algorithm 3.2 is the same as that of the algorithm proposed by Sun, Sampaio, and Candido [16, Algorithm 2.3]. Hence, a modified algorithm and related convergence theorems could be developed by using the idea of [16, Algorithm 5.3].
Remark 3.3
Under the assumptions in this section, consider the following:
For this equivalence relation, we only need to show that \(x=z\in \varOmega_{\text{SDCP}}\) implies that \(Ax=y\) and \(z=w\). Indeed, since \(x=z\in\varOmega_{\text{SDCP}}\), we know that \(\nabla h_{1}(z)\in \partial g_{1}(z)\) and \(\nabla h_{2}(Ax)\in\partial g_{2}(Ax)\). By Lemma 2.5,
By (3.21) and (3.23), we know that \(Ax=y\) and \(z=w\). □
Remark 3.4
In Algorithm 3.1, if \(\beta_{n}=\beta\) and \(r_{n}=r\) for each \(n\in\mathbb{N}\), and \(x_{N+1}=x_{N}\) for some \(N\in\mathbb{N}\), then \(x_{n}=x_{N}\), \(y_{n}=y_{N}\), and \(z_{n}=z _{N}\) for each \(n\in\mathbb{N}\) with \(n\geq N\). By Theorem 3.1, we know that \(\lim_{n\rightarrow \infty}x_{n}=x _{N}\in\varOmega_{\text{SDCP}}\). So, \(x_{n+1}=x_{n}\) can be used as a stopping criterion in Algorithm 3.1. Further, from (3.21), we have
Then \(x\in\varOmega_{\mathrm{SDCP}}\) if and only if \(x=w\).
Next, we give another convergence theorem for the split proximal linearized algorithm under different assumptions on \(\{r_{n}\}_{n \in\mathbb{N}}\).
Theorem 3.2
Let \(\{r_{n}\}_{n\in\mathbb{N}}\) be a sequence in \((0,\frac {1}{\Vert A\Vert ^{2}})\) with \(\lim_{n\rightarrow \infty}r_{n}=0\) and \(\sum_{n=1}^{\infty }r_{n}=\infty\). Let \(\{x_{n}\}_{n\in\mathbb{N}}\) be generated by Algorithm 3.1. Then \(\{x_{n}\}_{n\in\mathbb{N}}\) converges to an element \(\bar{x}\in\varOmega_{\mathrm{SDCP}}\).
Proof
Take any \(w\in\varOmega_{\mathrm{SDCP}}\) and \(n\in\mathbb{N}\), and let w and n be fixed. From the proof of Theorem 3.1, we have
$$\begin{aligned}& \nabla h_{2}(Ax_{n})-\frac{1}{\beta_{n}}(y_{n}-Ax_{n}) \in\partial g _{2}(Ax_{n}), \end{aligned}$$
(3.25)
$$\begin{aligned}& \nabla h_{1}(z_{n})-\frac{1}{\beta_{n}}(x_{n+1}-z_{n}) \in\partial g _{1}(x_{n+1}), \end{aligned}$$
Then there exist a subsequence \(\{y_{n_{k}}\}_{k\in\mathbb{N}}\) of \(\{y_{n}\}_{n\in\mathbb{N}}\), a subsequence \(\{z_{n_{k}}\}_{k\in \mathbb{N}}\) of \(\{z_{n}\}_{n\in\mathbb{N}}\), and \(\bar{x}\in H_{1}\) such that \(z_{n_{k}}\rightarrow \bar{x}\), \(y_{n_{k}}\rightarrow A\bar{x}\), and
Clearly, \(x_{n_{k}}\rightarrow \bar{x}\), \(x_{n_{k}+1}\rightarrow \bar{x}\), and \(Ax_{n_{k}}\rightarrow A\bar{x}\). By (3.25) and (3.26), we know that \(\bar{x}\in \varOmega_{\text{SDCP}}\). Thus, \(\bar{x}=\hat{x}\). Since \(\lim_{n\rightarrow \infty}\Vert x_{n}-\bar{x}\Vert \) exists, we know \(\lim_{n\rightarrow \infty}\Vert x _{n}-\bar{x}\Vert =\lim_{k\rightarrow \infty}\Vert x_{n_{k}}-\bar{x}\Vert =0\). Therefore, the proof is completed. □
4 Application to the DC program and numerical results
Let \(\rho>L\geq0\). Let H be a finite dimensional Hilbert space, and let \(g,h:H\rightarrow \mathbb{R}\) be proper, lower semicontinuous, and convex functions. We also assume that h is Fréchet differentiable, ∇h is L-Lipschitz continuous, and g is ρ-strongly convex. Let \(\{\beta_{n}\}_{n\in\mathbb{N}}\) be a sequence in \((a,b)\subseteq(0,\infty)\). Let \(\{r_{n}\}_{n\in \mathbb{N}}\) be a sequence in \((0,1)\) with \(0<\liminf_{n\rightarrow \infty}r _{n}\leq \limsup_{n\rightarrow \infty}r_{n}<1\).
and assume that \(\varOmega_{\text{DCP}}\neq\emptyset\). If \(H_{1}=H_{2}=H\), \(g_{1}=g_{2}=g\), and \(h_{1}=h_{2}=h\), then we get the following algorithm and convergence theorem from Algorithm 3.1 and Theorem 3.1, respectively.
Let \(\{x_{n}\}_{n\in\mathbb{N}}\) be generated by Algorithm 4.1. Then \(\{x_{n}\}_{n\in\mathbb{N}}\) converges to an element \(\bar {x}\in\varOmega_{\mathrm{DCP}}\).
In fact, we can get the following algorithm and convergence theorem by Algorithm 4.1 and Theorem 4.1, respectively.
Let \(\{x_{n}\}_{n\in\mathbb{N}}\) be generated by Algorithm 4.2. Then \(\{x_{n}\}_{n\in\mathbb{N}}\) converges to an element \(\bar {x}\in\varOmega_{\mathrm{DCP}}\).
Example 4.1
Let \(g,h:\mathbb{R}^{3}\rightarrow \mathbb{R}\) be defined by \(g(x_{1},x_{2},x_{3}):=2x_{1}^{2}+2x_{2}^{2}+2x_{3}^{2}\) and \(h(x_{1},x_{2},x_{3}):=4x_{1}+8x_{2}+12x_{3}\) for all \((x_{1},x_{2},x_{3})\in\mathbb{R}^{3}\). Then \(\varOmega_{\text{DCP}}:=\{x\in H: \nabla h(x)\in\partial g(x)\}=\{ (1,2,3)\}\).
Example 4.2
Let \(g_{1},h_{1}:\mathbb{R}^{3}\rightarrow \mathbb{R}\) be defined by \(g_{1}(x_{1},x_{2},x_{3}):=2x_{1}^{2}+2x_{2}^{2}+2x_{3}^{2}\) and \(h_{1}(x_{1},x_{2},x_{3}):=4x_{1}+8x_{2}+12x_{3}\) for all \((x_{1},x_{2},x_{3})\in\mathbb{R}^{3}\). Let \(g_{2},h_{2}:\mathbb{R}^{2}\rightarrow \mathbb{R}\) be defined by \(g_{2}(y_{1},y_{2}):=y_{1}^{2}+y_{2}^{2}\) and \(h_{2}(y_{1},y_{2}):=28y_{1}+64y_{2}\) for all \((y_{1},y_{2})\in\mathbb {R}^{2}\). Let \(A:\mathbb{R}^{3}\rightarrow \mathbb{R}^{2}\) be defined by \(A(x_{1},x_{2},x_{3})=(x_{1}+2x_{2}+3x_{3},4x_{1}+5x_{2}+6x_{3})\) for all \((x_{1},x_{2},x_{3})\in\mathbb{R}^{3}\). Here, \(\Vert A\Vert \approx9.5080\) (so that \(\frac{1}{\Vert A\Vert }\approx0.10517\)) and \(\varOmega_{\text{SDCP}}=\{(1,2,3)\}\).
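Assuming Algorithm 3.1 takes the form suggested by Remark 3.3 (a proximal linearized step for \((g_{2},h_{2})\) at \(Ax_{n}\) giving \(y_{n}\), then \(z_{n}=x_{n}+r_{n}A^{*}(y_{n}-Ax_{n})\), then a proximal linearized step for \((g_{1},h_{1})\) at \(z_{n}\)), the data of Example 4.2 admit closed-form steps, and the iterates can be checked numerically:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
c1 = np.array([4.0, 8.0, 12.0])    # grad h1 (constant)
c2 = np.array([28.0, 64.0])        # grad h2 (constant)

normA = np.linalg.norm(A, 2)       # spectral norm, about 9.508

def step(x, beta, r):
    # y_n: argmin_y g2(y) - <c2, y> + (1/(2*beta))*||y - A x||^2,
    # with g2(y) = ||y||^2, from the optimality condition 2y + (y - Ax)/beta = c2
    ax = A @ x
    y = (beta * c2 + ax) / (2.0 * beta + 1.0)
    # z_n = x_n + r * A^T (y_n - A x_n)
    z = x + r * (A.T @ (y - ax))
    # x_{n+1}: argmin_u g1(u) - <c1, u> + (1/(2*beta))*||u - z||^2,
    # with g1(u) = 2*||u||^2, giving 4u + (u - z)/beta = c1
    return (beta * c1 + z) / (4.0 * beta + 1.0)

x = np.zeros(3)
r = 0.9 / normA ** 2               # r_n must stay below 1/||A||^2
for _ in range(60):
    x = step(x, beta=1.0, r=r)
# the iterates approach the solution (1, 2, 3) of (SDCP)
```

With \(\beta_{n}=1\) and \(r_{n}\) just below \(1/\Vert A\Vert ^{2}\), the error map is linear with spectral radius at most \(1/5\), so the iterates contract geometrically toward \((1,2,3)\).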
From Table 1, we see that Algorithm 4.2 reaches the required error in only six iterations if \(\beta_{n}=500\) for all \(n\in \mathbb{N}\), but needs 73 iterations if \(\beta_{n}=0.1\) for all \(n\in\mathbb{N}\).
From Table 2, we see that Algorithm 4.2 reaches the required error in only seven iterations if \(\beta_{n}=100\) for all \(n\in\mathbb{N}\), but needs 72 iterations if \(\beta_{n}=0.1\) for all \(n\in\mathbb{N}\).
From Table 3, we see that Algorithm 3.1 reaches the required error in only seven iterations if \(\beta_{n}=700\) for all \(n\in\mathbb{N}\), but needs 99 iterations if \(\beta_{n}=0.1\) for all \(n\in\mathbb{N}\).
From Table 3 and Table 4, we see that Algorithm 3.1 needs 283 iterations to reach the required error if \(\beta_{n}=1\) and \(r_{n}=0.05\) for all \(n\in\mathbb{N}\), but only 39 iterations if \(\beta_{n}=1\) and \(r_{n}=0.09\) for all \(n\in\mathbb{N}\). On the other hand, for the other settings of \(\beta_{n}\), the numerical results in Table 3 and Table 4 show no significant differences across the settings of \(\{r_{n}\}_{n\in\mathbb{N}}\).
However, in Table 5, if \(\beta_{n}=1\) for all \(n\in\mathbb{N}\), then the numerical results differ greatly across the settings of \(\{r_{n}\}_{n\in\mathbb{N}}\).
The authors would like to thank the referees for many useful comments and suggestions for the improvement of the article. Prof. Chi-Ming Chen was supported by Grant No. MOST 107-2115-M-007-008 of the Ministry of Science and Technology of the Republic of China.
Competing interests
The authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.