1 Introduction and preliminaries
Let E be a real Banach space with \(E^{*}\) its dual space. Suppose that C is a nonempty closed and convex subset of E. The symbol 〈⋅, ⋅〉 denotes the generalized duality pairing between E and \(E^{*}\). The symbols “→” and “⇀” denote strong and weak convergence either in E or in \(E^{*}\), respectively.
A Banach space E is said to be strictly convex [1] if, for all \(x,y \in E\) which are linearly independent,
$$ \Vert x + y \Vert < \Vert x \Vert + \Vert y \Vert . $$
This inequality is equivalent to the following:
$$ \Vert x \Vert = \Vert y \Vert = 1, \quad x \ne y \quad \Rightarrow \quad \biggl\Vert \frac{x + y}{2} \biggr\Vert < 1. $$
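For concreteness (an illustration we add, not part of the original argument): the Euclidean norm on \(\mathbb{R}^{2}\) is strictly convex, while the max-norm is not, and both facts can be checked numerically.

```python
import numpy as np

# Strict convexity: for unit vectors x != y the midpoint must have norm < 1.
x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])            # unit vectors in both norms, x != y
mid = (x + y) / 2

eucl = np.linalg.norm(mid)          # Euclidean norm: sqrt(2)/2 < 1
print(eucl)

u = np.array([1.0, 0.0])
v = np.array([1.0, 1.0])            # both have max-norm 1, u != v
sup = np.abs((u + v) / 2).max()     # max-norm of midpoint equals 1,
print(sup)                          # so (R^2, ||.||_inf) is not strictly convex
```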
A Banach space E is said to be uniformly convex [1] if, for any two sequences \(\{ x_{n} \} \) and \(\{ y_{n} \} \) in E with \(\Vert x_{n} \Vert = \Vert y_{n} \Vert = 1\) and \(\lim_{n \to \infty} \Vert x_{n} + y_{n} \Vert = 2\), it follows that \(\lim _{n \to \infty } \Vert x_{n} - y_{n} \Vert = 0\). Every uniformly convex Banach space is strictly convex.
The modulus of smoothness of E [2] is the function \(\rho_{E}:[0, + \infty ) \to [0, + \infty )\) defined by
$$ \rho_{E}(t) = \sup \biggl\{ \frac{1}{2} \bigl(\Vert x + y \Vert + \Vert x - y \Vert \bigr) - 1:x,y \in E, \Vert x \Vert = 1, \Vert y \Vert \le t \biggr\} . $$
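As a point of reference (our addition, using the standard Hilbert-space formula \(\rho_{H}(t) = \sqrt{1 + t^{2}} - 1\) as a known fact), the supremum in the definition can be approximated numerically for the Euclidean plane:

```python
import numpy as np

def rho(t, samples=4000):
    """Approximate the modulus of smoothness of Euclidean R^2 by sampling.

    By rotation invariance we may fix x = (1, 0); for the Euclidean norm the
    supremum over ||y|| <= t is attained on the sphere ||y|| = t.
    """
    x = np.array([1.0, 0.0])
    phi = np.linspace(0.0, 2 * np.pi, samples)
    y = t * np.stack([np.cos(phi), np.sin(phi)], axis=1)
    vals = 0.5 * (np.linalg.norm(x + y, axis=1)
                  + np.linalg.norm(x - y, axis=1)) - 1.0
    return vals.max()

t = 0.5
print(rho(t), np.sqrt(1.0 + t ** 2) - 1.0)   # the two values agree closely
```

Since \(\rho_{H}(t)/t = (\sqrt{1 + t^{2}} - 1)/t \to 0\) as \(t \to 0\), this also illustrates that Hilbert spaces are uniformly smooth.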
A Banach space E is said to be uniformly smooth [2] if \(\frac{\rho_{E}(t)}{t} \to 0\) as \(t \to 0\). Moreover, E is uniformly smooth if and only if \(E^{*}\) is uniformly convex [2].
We say that E has Property (H) if every sequence \(\{ x_{n}\} \subset E\) that converges weakly to \(x \in E\) and satisfies \(\Vert x_{n} \Vert \to \Vert x \Vert \) as \(n \to \infty \) necessarily converges to x in norm.
If E is uniformly convex and uniformly smooth, then E has Property (H).
With each \(x \in E\) we associate the set
$$ J(x) = \bigl\{ f \in E^{*}: \langle x,f \rangle = \Vert x \Vert ^{2} = \Vert f \Vert ^{2} \bigr\} . $$
The multi-valued mapping \(J:E \to 2^{E^{*}}\) is called the normalized duality mapping [1]. We now list some elementary properties of J.
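To make the definition concrete (our illustration; the coordinatewise formula for J on finite-dimensional \(\ell^{p}\) spaces, \(1 < p < \infty \), is a standard fact we assume here), the defining identities \(\langle x,Jx \rangle = \Vert x \Vert ^{2} = \Vert Jx \Vert ^{2}\) can be verified numerically:

```python
import numpy as np

def duality_map(x, p):
    """Normalized duality mapping on (R^n, ||.||_p), 1 < p < infinity:
    J(x)_i = ||x||_p^(2-p) * |x_i|^(p-1) * sign(x_i), and Jx lies in the
    dual space (R^n, ||.||_q) with 1/p + 1/q = 1."""
    norm_p = np.linalg.norm(x, ord=p)
    return norm_p ** (2 - p) * np.abs(x) ** (p - 1) * np.sign(x)

p = 3.0
q = p / (p - 1)                        # conjugate exponent
x = np.array([1.0, -2.0, 3.0])
f = duality_map(x, p)

pairing = x @ f                        # the duality pairing <x, f>
print(pairing, np.linalg.norm(x, p) ** 2, np.linalg.norm(f, q) ** 2)
# all three numbers coincide, as the definition of J(x) requires
```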
For a nonlinear mapping U, we use \(F(U)\) and \(N(U)\) to denote its fixed point set and null point set, respectively; that is, \(F(U) = \{ x \in D(U):Ux = x\}\) and \(N(U) = \{ x \in D(U):Ux = 0\}\).
Maximal monotone mappings and weakly or strongly relatively non-expansive mappings are important classes of nonlinear mappings because of their practical background. Much work has been done on designing iterative algorithms that approximate either a null point of a maximal monotone mapping or a fixed point of a weakly or strongly relatively non-expansive mapping; see [5–10] and the references therein. It is natural to construct iterative algorithms that approximate common solutions, that is, elements that are simultaneously null points of maximal monotone mappings and fixed points of weakly or strongly relatively non-expansive mappings; see [11–15] and the references therein. We now recall some closely related work.
In [12], Wei et al. presented the following iterative algorithms to approximate a common element of the set of null points of a maximal monotone mapping \(T \subset E \times E^{*}\) and the set of fixed points of a strongly relatively non-expansive mapping \(S \subset E \times E\), where E is a real uniformly convex and uniformly smooth Banach space:
$$\begin{aligned}& \textstyle\begin{cases} x_{1} \in E,\quad r_{1} > 0, \\ y_{n} = (J + r_{n}T)^{ - 1}J(x_{n} + e_{n}), \\ z_{n} = J^{ - 1}[\alpha_{n}Jx_{n} + (1 - \alpha_{n})Jy_{n}], \\ u_{n} = J^{ - 1}[\beta_{n}Jx_{n} + (1 - \beta_{n})JSz_{n}], \\ H_{n} = \{ z \in E:\varphi (z,z_{n}) \le \alpha_{n}\varphi (z,x_{n}) + (1 - \alpha_{n})\varphi (z,x_{n} + e_{n})\}, \\ V_{n} = \{ z \in E:\varphi (z,u_{n}) \le \beta_{n}\varphi (z,x_{n}) + (1 - \beta_{n})\varphi (z,z_{n})\}, \\ W_{n} = \{ z \in E: \langle z - x_{n},Jx_{1} - Jx _{n} \rangle \le 0\}, \\ x_{n + 1} = \Pi_{H_{n} \cap V_{n} \cap W_{n}}(x_{1}),\quad n \in N, \end{cases}\displaystyle \end{aligned}$$
(1.1)
$$\begin{aligned}& \textstyle\begin{cases} x_{1} \in E,\quad r_{1} > 0, \\ y_{n} = (J + r_{n}T)^{ - 1}J(x_{n} + e_{n}), \\ z_{n} = J^{ - 1}[\alpha_{n}Jx_{1} + (1 - \alpha_{n})Jy_{n}], \\ u_{n} = J^{ - 1}[\beta_{n}Jx_{1} + (1 - \beta_{n})JSz_{n}], \\ H_{n} = \{ z \in E:\varphi (z,z_{n}) \le \alpha_{n}\varphi (z,x_{1}) + (1 - \alpha_{n})\varphi (z,x_{n} + e_{n})\}, \\ V_{n} = \{ z \in E:\varphi (z,u_{n}) \le \beta_{n}\varphi (z,x_{1}) + (1 - \beta_{n})\varphi (z,z_{n})\}, \\ W_{n} = \{ z \in E:\langle z - x_{n},Jx_{1} - Jx _{n}\rangle \le 0\}, \\ x_{n + 1} = \Pi_{H_{n} \cap V_{n} \cap W_{n}}(x_{1}),\quad n \in N, \end{cases}\displaystyle \end{aligned}$$
(1.2)
and
$$ \textstyle\begin{cases} x_{1} \in E,\quad r_{1} > 0, \\ y_{n} = (J + r_{n}T)^{ - 1}J(x_{{n}} + e_{n}), \\ z_{n} = J^{ - 1}[\alpha_{n}Jx_{n} + (1 - \alpha_{n})Jy_{n}], \\ u_{n} = J^{ - 1}[\beta_{n}Jx_{n} + (1 - \beta_{n})JSz_{n}], \\ H_{1} = \{ z \in E:\varphi (z,z_{1}) \le \alpha_{1}\varphi (z,x_{1}) + (1 -\alpha_{1})\varphi (z,x_{1} + e_{1})\}, \\ V_{1} = \{ z \in E:\varphi (z,u_{1}) \le \beta_{1}\varphi (z,x_{1}) + (1 - \beta_{1})\varphi (z,z_{1})\}, \\ W_{1} = E, \\ H_{n} = \{ z \in H_{n - 1} \cap V_{n - 1} \cap W_{n - 1}:\varphi (z,z _{n}) \le \alpha_{n}\varphi (z,x_{n}) + (1 - \alpha_{n})\varphi (z,x_{n} + e _{n})\}, \\ V_{n} = \{ z \in H_{n - 1} \cap V_{n - 1} \cap W_{n - 1}:\varphi (z,u _{n}) \le \beta_{n}\varphi (z,x_{n}) + (1 - \beta_{n})\varphi (z,z_{n})\}, \\ W_{n} = \{ z \in H_{n - 1} \cap V_{n - 1} \cap W_{n - 1}: \langle z - x_{n},Jx_{1} - Jx_{n} \rangle \le 0\}, \\ x_{n + 1} = \Pi_{H_{n} \cap V_{n} \cap W_{n}}(x_{1}),\quad n \in N. \end{cases} $$
(1.3)
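In a Hilbert space, J is the identity, \(\varphi (z,x) = \Vert z - x \Vert ^{2}\), each set \(H_{n}\), \(V_{n}\), \(W_{n}\) reduces to a half-space, and Π is the metric projection. The following one-dimensional sketch of scheme (1.1) is our own illustration; the concrete choices \(Tx = x\) (so \(N(T) = \{ 0\} \)), \(Sx = x/2\) (so \(F(S) = \{ 0\} \)), and \(e_{n} = 0\) are assumptions made only for the demonstration.

```python
import math

# 1-D Hilbert-space instance of scheme (1.1): J = identity,
# phi(z, x) = (z - x)^2, T x = x (so N(T) = {0}), S x = x / 2 (F(S) = {0}),
# errors e_n = 0.  Every projection set is a half-line a*z <= b, so the
# projection of x_1 onto their intersection is a simple clamp.
def project(x1, halflines):
    lo, hi = -math.inf, math.inf
    for a, b in halflines:
        if a > 1e-12:
            hi = min(hi, b / a)
        elif a < -1e-12:
            lo = max(lo, b / a)
    return min(max(x1, lo), hi)

x1 = 4.0
x = x1
r, alpha, beta = 1.0, 0.5, 0.5
for n in range(100):
    y = x / (1.0 + r)                      # resolvent (I + rT)^(-1) x
    z = alpha * x + (1 - alpha) * y
    u = beta * x + (1 - beta) * (z / 2)    # S z = z / 2
    # H_n: (z' - z)^2 <= (z' - x)^2        ->  2(x - z) z' <= x^2 - z^2
    H = (2 * (x - z), x ** 2 - z ** 2)
    # V_n: (z' - u)^2 <= beta (z' - x)^2 + (1 - beta)(z' - z)^2
    w, q = beta * x + (1 - beta) * z, beta * x ** 2 + (1 - beta) * z ** 2
    V = (2 * (w - u), q - u ** 2)
    # W_n: (z' - x_n)(x_1 - x_n) <= 0
    W = (x1 - x, x * (x1 - x))
    x = project(x1, [H, V, W])

print(x)   # close to 0, the projection of x_1 onto N(T) ∩ F(S)
```

With these data the iterates decrease geometrically toward \(0 = \Pi_{N(T) \cap F(S)}(x_{1})\).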
Under some mild assumptions, the sequence \(\{ x_{n}\}\) generated by (1.1), (1.2), or (1.3) is proved to converge strongly to \(\Pi_{N(T) \cap F(S)}(x _{1})\). Compared with the projective iterative algorithms (1.1) and (1.2), algorithm (1.3) is called a monotone projection method, since the projection sets \(H_{n}\), \(V_{n}\), and \(W_{n}\) are monotone in the sense that \(H_{n + 1} \subset H_{n}\), \(V_{n + 1} \subset V_{n}\), and \(W_{n + 1} \subset W_{n}\) for \(n \in N\). Theoretically, the monotone projection method reduces the computational cost.
In [13], Klin-eam et al. presented the following iterative algorithm to approximate a common element of the set of null points of a maximal monotone mapping \(A \subset E \times E^{*}\) and the sets of fixed points of two strongly relatively non-expansive mappings \(S,T \subset C \times C\), where C is a nonempty closed and convex subset of a real uniformly convex and uniformly smooth Banach space E:
$$ \textstyle\begin{cases} u_{n} = J^{ - 1}[\alpha_{n}Jx_{n} + (1 - \alpha_{n})JT{z}_{ {n}}], \\ {z}_{n} = J^{ - 1}[\beta_{n}Jx_{n} + (1 - \beta_{n})JS( {J} + {r}_{{n}}{A})^{ - 1}{Jx}_{ {n}}], \\ H_{n} = \{ z \in C:\varphi (z,u_{n}) \le \varphi (z,x_{n})\}, \\ V_{n} = \{ z \in C: \langle z - x_{n},Jx_{1} - Jx_{n} \rangle \le 0\}, \\ x_{n + 1} = \Pi_{H_{n} \cap V_{n}}(x_{1}),\quad n \in N. \end{cases} $$
(1.4)
Under some assumptions, the sequence \(\{ x_{n}\}\) generated by (1.4) is proved to converge strongly to \(\Pi_{N(A) \cap F(S) \cap F(T)}(x_{1})\).
In [14], Wei et al. extended the topic to the case of finitely many maximal monotone mappings \(\{ T_{i}\}_{i = 1}^{m_{1}}\) and finitely many strongly relatively non-expansive mappings \(\{ S_{j}\}_{j = 1}^{m_{2}}\). They constructed the following two iterative algorithms in a real uniformly convex and uniformly smooth Banach space E:
$$ \textstyle\begin{cases} x_{1} \in E,\quad r > 0, \\ y_{n} = J^{ - 1}[\beta_{{n}}Jx_{n} + \sum_{i = 1}^{m_{1}} \beta_{n,i}J(J + rT_{i})^{ - 1}Jx_{n}], \\ x_{n + 1} = J^{ - 1}[\alpha_{n}Jx_{n} + \sum_{j = 1}^{m_{2}}\alpha_{n,j}J S_{j} y_{n}],\quad n \in N, \end{cases} $$
(1.5)
and
$$ \textstyle\begin{cases} x_{1} \in E,\quad r > 0, \\ y_{n} = J^{ - 1}[\beta_{{n}}Jx_{n} + (1 - \beta_{n})J(J + rT _{1})^{ - 1}J(J + rT_{2})^{ - 1}J \cdots (J + rT_{m_{1}})^{ - 1}Jx_{n}], \\ x_{n + 1} = J^{ - 1}[\alpha_{n}Jx_{n} + (1 - \alpha_{n})JS_{1}S_{2}\cdots S_{m_{2}}y_{n}],\quad n \in N. \end{cases} $$
(1.6)
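Specializing again to a Hilbert space (our illustration): J = I, and \((J + rT_{i})^{-1}J\) becomes the classical resolvent \((I + rT_{i})^{-1}\). In the sketch of scheme (1.5) below, the mappings \(T_{i}x = c_{i}(x - p)\) and the contractions \(S_{j}x = p + \rho_{j}(x - p)\) sharing the common point p are assumptions chosen only so that the common solution is known in advance.

```python
import numpy as np

# Hilbert-space (R^2) instance of scheme (1.5) with m1 = m2 = 2.
# T_i x = c_i (x - p) is maximal monotone with N(T_i) = {p};
# S_j x = p + rho_j (x - p) is a contraction with F(S_j) = {p}.
p = np.array([1.0, 2.0])
c = [1.0, 2.0]                     # slopes of the monotone mappings
rho = [0.5, 0.9]                   # contraction factors of S_1, S_2
r = 1.0

def resolvent(x, ci):
    # Solve y + r*ci*(y - p) = x for y, i.e. y = (I + r T_i)^(-1) x.
    return (x + r * ci * p) / (1.0 + r * ci)

beta0, betas = 0.2, [0.4, 0.4]     # beta_n + sum_i beta_{n,i} = 1
alpha0, alphas = 0.2, [0.4, 0.4]   # alpha_n + sum_j alpha_{n,j} = 1

x = np.array([10.0, -5.0])         # x_1
for n in range(100):
    y = beta0 * x + sum(b * resolvent(x, ci) for b, ci in zip(betas, c))
    x = alpha0 * x + sum(a * (p + rj * (y - p)) for a, rj in zip(alphas, rho))

print(x)   # approaches p = (1, 2), the common null/fixed point
```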
Under some assumptions, the sequence \(\{ x_{n}\}\) generated by (1.5) or (1.6) is proved to converge weakly to \(v = \lim_{n \to \infty } \Pi_{( \bigcap_{i = 1}^{m_{1}}N(T_{i})) \cap ( \bigcap_{j = 1}^{m_{2}}F(S _{j}))}(x_{n})\).
Inspired by the previous work, in Sect. 2.1 we construct some new iterative algorithms to approximate a common element of the sets of null points of countably many maximal monotone mappings and the sets of fixed points of countably many weakly relatively non-expansive mappings. New proof techniques are introduced, the restrictions are mild, and computational errors are taken into account. In Sect. 2.2, an example is given and a specific iterative formula is proved; computational experiments demonstrating the effectiveness of the new abstract iterative algorithms are conducted. In Sect. 2.3, an application to the minimization problem is presented.
The following preliminaries are also needed in our paper.