1 Introduction
First, we give some definitions for the main results.
In 2008, Suzuki [
1] introduced a class of single-valued mappings, called Suzuki-generalized nonexpansive mappings, as follows.
In [
1], Suzuki proved the existence of fixed points and convergence theorems for mappings satisfying condition
\((C)\) in Banach spaces. In the same setting, under certain conditions, Dhompongsa et al. [2] improved the results of Suzuki [1] and obtained a fixed point result for mappings with condition \((C)\).
In 2011, Karapınar et al. [3] proposed some new classes of mappings which significantly generalized the notion of Suzuki-type nonexpansive mappings, as follows.
From the above definition, it is clear that every nonexpansive mapping satisfies condition \(SKC\), but the converse is not true, as becomes clear from the following examples.
T obeys condition \(SKC\). Indeed, take \((x, y) = (0, 0)\) and \((x, y) = (1,1)\); then
$$\frac{1}{2} d\bigl(T(0,0), (0,0) \bigr) \leq d\bigl((0,0), (1,1) \bigr) $$
and
$$\begin{aligned} N\bigl((0,0), (1,1)\bigr) = {}& \max \biggl\{ d\bigl((0,0), (1,1)\bigr), \frac{1}{2} \bigl[d\bigl(T(0,0), (0,0)\bigr) + d\bigl(T(1,1),(1,1)\bigr) \bigr], \\ &{} \frac{1}{2} \bigl[d\bigl(T (1,1),(0,0)\bigr) + d\bigl(T(0,0), (1,1) \bigr) \bigr] \biggr\} \\ ={}& 1, \end{aligned}$$
thus
$$d\bigl(T(0,0), T(1,1)\bigr) =1 \leq N\bigl((0,0), (1,1)\bigr)=1. $$
One can check that the \(SKC\)-mapping condition holds for the other points of the space
X. Note that
\(F(T) = \{(1,1)\} \neq \emptyset\), and
\(F(T)\) is closed and convex.
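The verification above can be replayed numerically. The metric d (the max metric on \(\mathbb{R}^{2}\)) and the mapping T below (with \(T(0,0)=(0,1)\) and \(T(1,1)=(1,1)\)) are hypothetical choices that merely reproduce the displayed numbers; they are not taken from the paper.

```python
# Replaying the SKC computation above with assumed choices of d and T.

def d(p, q):
    """Max (Chebyshev) metric on R^2 -- an assumption for this sketch."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

T = {(0, 0): (0, 1), (1, 1): (1, 1)}   # hypothetical mapping; F(T) = {(1, 1)}

def N(x, y):
    # N(x,y) = max{ d(x,y), (1/2)[d(Tx,x)+d(Ty,y)], (1/2)[d(Ty,x)+d(Tx,y)] }
    return max(d(x, y),
               0.5 * (d(T[x], x) + d(T[y], y)),
               0.5 * (d(T[y], x) + d(T[x], y)))

x, y = (0, 0), (1, 1)
print(0.5 * d(T[x], x) <= d(x, y))     # True: the SKC premise holds
print(d(T[x], T[y]) <= N(x, y) == 1)   # True: d(T(0,0), T(1,1)) = 1 <= N = 1
```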
In the framework of
\(\operatorname {CAT}(0)\) spaces, several characterizations of fixed point results for mappings with condition
\((C)\) have been given. In [
4], Abbas
et al. extended the result of Nanjaras
et al. [
5] for the class of
\(SKC\)-mappings and proved some strong and Δ-convergence results for a finite family of
\(SKC\)-mappings using an Ishikawa-type iteration process in the framework of
\(\operatorname {CAT}(0)\) spaces (see [
4]).
On the other hand, the following fixed point iteration processes have been extensively studied by many authors for approximating either fixed points of nonlinear mappings (when these mappings are already known to have fixed points) or solutions of nonlinear operator equations.
(M)
The Mann iteration process (see [
6,
7]) is defined as follows:
For
C, a convex subset of a Banach space
X, and a nonlinear mapping
T of
C into itself, for each
\(x_{0}\in C\), the sequence
\(\{ x_{n}\} \) in
C is defined by
$$\begin{aligned} x_{n+1} =& ( 1- \alpha_{n}) x_{n} + \alpha_{n} Tx_{n} = M(x_{n}, \alpha _{n}, T),\quad n \in\mathbb{N}, \end{aligned}$$
(1.1)
where
\(\{\alpha_{n}\}\) is a real sequence in
\([0, 1]\) which satisfies the following conditions:
(M1)
\(0 \leq\alpha_{n} <1 \),
(M2)
\(\lim_{n\to \infty} \alpha_{n} = 0\),
(M3)
\(\sum_{n =1}^{\infty} \alpha_{n} = \infty\).
In some applications, condition (M3) is replaced by the condition
\(\sum_{n=1}^{\infty} \alpha_{n} ( 1-\alpha_{n}) = \infty\).
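As an illustration of scheme (1.1), here is a minimal numerical sketch on the real line. The nonexpansive map \(T(x) = \cos x\) and the step sizes \(\alpha_{n} = 1/(n+2)\) are our own choices (they satisfy (M1)-(M3)), not taken from the paper.

```python
import math

# Mann iteration x_{n+1} = (1 - a_n) x_n + a_n T x_n on the real line.
# T = cos and a_n = 1/(n+2) are illustrative: 0 <= a_n < 1, a_n -> 0,
# and sum a_n = infinity, so (M1)-(M3) hold.

def mann(T, x0, steps=2000):
    x = x0
    for n in range(steps):
        a = 1.0 / (n + 2)
        x = (1 - a) * x + a * T(x)
    return x

p = mann(math.cos, x0=1.0)
print(abs(math.cos(p) - p) < 1e-3)  # True: p approximates the fixed point of cos
```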
(I)
The Ishikawa iteration process (see [
6,
8]) is defined as follows:
With
C,
X, and
T as in (M), for each
\(x_{0}\in C\), the sequence
\(\{ x_{n} \}\) in
C is defined by
$$\begin{aligned} x_{n+1} = &( 1- \alpha_{n}) x_{n} + \alpha_{n} T\bigl[(1-\beta_{n}) x_{n} + \beta_{n} Tx_{n}\bigr],\quad n \in\mathbb{N}, \end{aligned}$$
(1.2)
where
\(\{\alpha_{n}\}\) and
\(\{\beta_{n}\}\) are sequences in
\([0,1]\) which satisfy the following conditions:
(I1)
\(0\leq\alpha_{n} \leq \beta_{n} < 1\),
(I2)
\(\lim_{n \to\infty} \beta_{n} =0\),
(I3)
\(\sum_{n =1}^{\infty} \alpha_{n} \beta_{n} = \infty\).
It is clear that the process (M) is not a special case of the process (I) because of condition (I1). In some papers (see [9‐13]) condition (I1), \(0\leq\alpha_{n} \leq\beta_{n} < 1\), has been replaced by the more general condition (\(\mathrm{I}_{1}^{'}\)), \(0< \alpha _{n}, \beta_{n} <1\). In this general setting, the process (I) is a natural generalization of the process (M). It is observed that, if the process (M) is convergent, then the process (I) with condition (\(\mathrm{I}_{1}^{'}\)) is also convergent under suitable conditions on
\(\alpha_{n}\) and
\(\beta_{n}\).
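A minimal sketch of scheme (1.2) on the real line, again with \(T(x) = \cos x\); the parameter choices \(\alpha_{n} = \beta_{n} = 1/\sqrt{n+2}\) are our own and satisfy (I1)-(I3), since \(0 \leq \alpha_{n} \leq \beta_{n} < 1\), \(\beta_{n} \to 0\), and \(\sum \alpha_{n}\beta_{n} = \sum 1/(n+2) = \infty\).

```python
import math

# Ishikawa iteration (1.2) on the real line with illustrative parameters
# a_n = b_n = 1/sqrt(n+2), which satisfy conditions (I1)-(I3).

def ishikawa(T, x0, steps=500):
    x = x0
    for n in range(steps):
        a = b = 1.0 / math.sqrt(n + 2)
        y = (1 - b) * x + b * T(x)      # inner step
        x = (1 - a) * x + a * T(y)      # outer step
    return x

p = ishikawa(math.cos, x0=1.0)
print(abs(math.cos(p) - p) < 1e-6)  # True: p approximates the fixed point of cos
```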
Recently, Agarwal
et al. [
14] introduced the
S-iteration process as follows. For
C, a convex subset of a linear space
X, and a mapping
T of
C into itself, the iterative sequence
\(\{x_{n}\}\) of the
S-iteration process, generated from
\(x_{0} \in C\), is defined by
$$ \textstyle\begin{cases} x_{n+1} = ( 1- \alpha_{n}) Tx_{n} + \alpha_{n} Ty_{n}, \\ y_{n} = ( 1- \beta_{n}) x_{n} + \beta_{n} Tx_{n},\quad n \in\mathbb{N}, \end{cases} $$
(1.3)
where
\(\{\alpha_{n}\}\) and
\(\{\beta_{n}\}\) are sequences in
\((0, 1)\) satisfying the condition
$$\sum_{n =0}^{\infty} \alpha_{n} \beta_{n} ( 1-\beta_{n})= \infty. $$
It is easy to see that neither the process (M) nor the process (I) reduces to the
S-iteration process, nor conversely. Thus, the
S-iteration process is independent of the Mann [
7] and Ishikawa [
8] iteration processes (see [
6,
14,
15]).
It is observed that the rate of convergence of the
S-iteration process is similar to that of the Picard iteration process, and faster than that of the Mann iteration process, for a contraction mapping (see [
6,
14,
15]).
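This observation on convergence rates is easy to check empirically for a simple contraction; the map \(T(x) = x/2\) (fixed point 0) and the constant parameters \(\alpha_{n} = \beta_{n} = 1/2\) below are illustrative choices of ours.

```python
# Error after 20 steps of the Picard, Mann, and S-iteration processes for
# the contraction T(x) = x/2.  Per-step contraction factors here are:
# Picard 0.5, Mann 0.75, S-iteration 0.4375.

def mann_step(T, x, a):          # (1.1): x_{n+1} = (1-a) x_n + a T x_n
    return (1 - a) * x + a * T(x)

def s_step(T, x, a, b):          # (1.3): y_n then x_{n+1}
    y = (1 - b) * x + b * T(x)
    return (1 - a) * T(x) + a * T(y)

T = lambda t: 0.5 * t
xp = xm = xs = 1.0
for _ in range(20):
    xp = T(xp)                        # Picard: x_{n+1} = T x_n
    xm = mann_step(T, xm, 0.5)
    xs = s_step(T, xs, 0.5, 0.5)

print(abs(xs) < abs(xp) < abs(xm))    # True: S fastest here, Mann slowest
```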
On the other hand, in [
16], Leuştean proved that
\(\operatorname {CAT}(0)\) spaces are uniformly convex hyperbolic spaces with a modulus of uniform convexity
\(\eta(r, \varepsilon) = \frac{\varepsilon^{2}}{8}\) quadratic in
ε. Therefore, we know that the class of uniformly convex hyperbolic spaces is a generalization of both uniformly convex Banach spaces and
\(\operatorname {CAT}(0)\) spaces.
We consider the following definition of a hyperbolic space introduced by Kohlenbach [
17], and, also, Zhao
et al. [
18] and Kim
et al. [
19] got some convergence results in a hyperbolic space setting.
A triple
\((X, d, W)\) satisfying only (W1) is called a
convex metric space in the sense of Takahashi [20] (see [21]). A triple
\((X, d, W)\) satisfying (W1)-(W3) gives the notion of a space of hyperbolic type in the sense of Goebel and Kirk [22]. Condition (W4) was already considered by Itoh [23] under the name of ‘condition III’, and it was used by Reich and Shafrir [24] and Kirk [25] to define their notions of hyperbolic spaces.
The class of hyperbolic spaces includes normed spaces and convex subsets thereof, the Hilbert ball equipped with the hyperbolic metric [
26], Hadamard manifolds, and the
\(\operatorname {CAT}(0)\) spaces in the sense of Gromov (see [
27]).
A subset
C of a hyperbolic space
X is convex if
\(W( x, y, \alpha) \in C\) for all
\(x, y \in C\) and
\(\alpha\in[0,1]\). If
\(x, y \in X\) and
\(\lambda\in[0,1]\), then we use the notation
\((1-\lambda)x \oplus\lambda y\) for
\(W(x, y, \lambda)\). The following holds even for the more general setting of a convex metric space [
20,
21]: for all
\(x, y\in X\) and
\(\lambda\in[0,1]\),
$$d\bigl( x, (1-\lambda)x \oplus\lambda y\bigr) = \lambda d( x, y) $$
and
$$d\bigl( y, (1-\lambda)x \oplus\lambda y\bigr)= ( 1-\lambda) d( x, y). $$
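In a normed space one may take \(W(x, y, \lambda) = (1-\lambda)x + \lambda y\), and both identities follow from homogeneity of the norm. A quick numerical check in \(\mathbb{R}^{2}\):

```python
import math
import random

# W(x, y, t) = (1 - t) x + t y in the normed space R^2, together with a
# numerical check of the two displayed identities.

def d(p, q):
    return math.dist(p, q)   # Euclidean metric

def W(x, y, t):
    return tuple((1 - t) * xi + t * yi for xi, yi in zip(x, y))

random.seed(0)
x = (random.random(), random.random())
y = (random.random(), random.random())
t = 0.3
z = W(x, y, t)
print(abs(d(x, z) - t * d(x, y)) < 1e-12)        # True: d(x, W(x,y,t)) = t d(x,y)
print(abs(d(y, z) - (1 - t) * d(x, y)) < 1e-12)  # True: d(y, W(x,y,t)) = (1-t) d(x,y)
```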
A hyperbolic space
\((X, d, W)\) is
uniformly convex [
16] if, for any
\(r > 0\) and
\(\varepsilon\in(0, 2]\), there exists
\(\delta \in(0, 1]\) such that, for all
\(a, x, y \in X\),
$$\begin{aligned} d \biggl(\frac{1}{2} x \oplus\frac{1}{2} y, a \biggr) \leq& ( 1- \delta ) r, \end{aligned}$$
provided
\(d(x, a) \leq r\),
\(d(y, a) \leq r\), and
\(d(x, y) \geq \varepsilon r\).
A mapping \(\eta: (0, \infty) \times(0, 2] \to(0,1]\) providing such a \(\delta= \eta(r, \varepsilon)\) for given \(r>0\) and \(\varepsilon \in(0, 2]\), is called a modulus of uniform convexity. We say that η is monotone if it decreases with r for fixed ε.
The purpose of this paper is to prove some strong and Δ-convergence theorems of the
S-iteration process which is generated by
\(SKC\)-mappings in uniformly convex hyperbolic spaces. Our results can be viewed as an extension and a generalization of several well-known results in Banach spaces as well as
\(\operatorname {CAT}(0)\) spaces (see [
1‐
6,
15,
21,
28‐
30]).
2 Preliminaries
First, we give the concept of Δ-convergence and some of its basic properties.
Let
C be a nonempty subset of metric space
\((X, d)\) and let
\(\{ x_{n}\}\) be any bounded sequence in
X. Let
\(\operatorname {diam}(C)\) denote the diameter of
C. Consider a continuous functional
\(r_{a}(\cdot, \{x_{n}\}): X \to\mathbb{R^{+}}\) defined by
$$r_{a}\bigl( x, \{x_{n}\}\bigr) = \limsup _{n \to\infty} d( x_{n}, x),\quad x \in X. $$
Then the infimum of
\(r_{a} (\cdot, \{x_{n}\})\) over
C is said to be the
asymptotic radius of
\(\{x_{n}\}\) with respect to
C and is denoted by
\(r_{a}(C, \{x_{n}\})\).
A point
\(z \in C\) is said to be an
asymptotic center of the sequence
\(\{x_{n}\}\) with respect to
C if
$$r_{a} \bigl( z, \{x_{n}\}\bigr) = \inf \bigl\{ r_{a} \bigl( x, \{x_{n}\}\bigr): x \in C \bigr\} , $$
the set of all asymptotic centers of
\(\{x_{n}\}\) with respect to
C is denoted by
\(\operatorname {AC}(C, \{x_{n}\})\). This is the set of minimizers of the functional
\(r_{a}(\cdot,\{x_{n}\})\); it may be empty, a singleton, or contain infinitely many points.
If the asymptotic radius and the asymptotic center are taken with respect to X, then these are simply denoted by \(r_{a}( X, \{x_{n}\}) = r_{a}( \{x_{n}\})\) and \(\operatorname {AC}(X, \{x_{n}\})= \operatorname {AC}(\{x_{n}\})\), respectively. We know that, for \(x \in X\), \(r_{a}( x, \{x_{n}\}) = 0 \) if and only if \(\lim_{n \to\infty} x_{n} = x\).
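For intuition, the asymptotic radius and center can be approximated numerically. The sequence \(x_{n} = (-1)^{n}(1 + 1/n)\) in \(\mathbb{R}\) is our own example: here \(r_{a}(x, \{x_{n}\}) = \max(|x-1|, |x+1|)\), which is minimized at \(x = 0\), so \(\operatorname {AC}(\{x_{n}\}) = \{0\}\) and \(r_{a}(\{x_{n}\}) = 1\).

```python
# Approximating the asymptotic radius and center of x_n = (-1)^n (1 + 1/n)
# in R.  The limsup is approximated by a max over a far tail of the sequence,
# and the center is searched on a coarse grid; both are numerical shortcuts.

def r_a(x, seq_tail):
    """Approximate limsup_n d(x_n, x) by a max over a far tail."""
    return max(abs(xn - x) for xn in seq_tail)

tail = [(-1) ** n * (1 + 1 / n) for n in range(10**4, 10**4 + 1000)]
radius, center = min((r_a(c / 100, tail), c / 100) for c in range(-200, 201))
print(round(center, 2), round(radius, 3))  # prints: 0.0 1.0
```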
It is well known that every bounded sequence has a unique asymptotic center with respect to each closed convex subset in uniformly convex Banach spaces and even \(\operatorname {CAT}(0)\) spaces.
The following lemma is due to Leuştean [
31] and we know that this property also holds in a complete uniformly convex hyperbolic space.
Recall that a bounded sequence \(\{x_{n}\}\) in a complete uniformly convex hyperbolic space with a monotone modulus of uniform convexity η is said to be regular if \(r_{a}(X, \{x_{n}\}) = r_{a}(X, \{ u_{n}\})\) for every subsequence \(\{u_{n}\}\) of \(\{x_{n}\}\).
It is well known that every bounded sequence in a Banach space (or complete
\(\operatorname {CAT}(0)\) space (see [
28])) has a regular subsequence. Since every regular sequence Δ-converges, we see immediately that every bounded sequence in a complete uniformly convex hyperbolic space with a monotone modulus of uniform convexity
η has a Δ-convergent subsequence. Notice also (see [32], Lemma 1.10) that, given a bounded sequence
\(\{x_{n}\} \subset X\), where
X is a complete uniformly convex hyperbolic space with a monotone modulus of uniform convexity
η, if Δ-\(\lim_{n} x_{n} = x\), then for every
\(y \in X\) with
\(y \neq x\) we have
$$\limsup_{n \to\infty} d( x_{n}, x) < \limsup_{n \to\infty} d( x_{n}, y). $$
Clearly,
X satisfies the above condition, which is known in Banach space theory as the Opial property.
3 Main results
Now we give the definition of Fejér monotone sequences.
We now define the
S-iteration process in hyperbolic spaces (see [
19]):
Let
C be a nonempty closed convex subset of a hyperbolic space
X and let
T be a mapping of
C into itself. For any
\(x_{1} \in C\), the sequence
\(\{x_{n}\}\) of the
S-iteration process is defined by
$$ \textstyle\begin{cases} x_{n+1} = W(Tx_{n}, Ty_{n}, \alpha_{n}), \\ y_{n} = W(x_{n}, Tx_{n}, \beta_{n}), \quad n \in\mathbb{N}, \end{cases} $$
(3.1)
where
\(\{\alpha_{n}\}\) and
\(\{\beta_{n}\}\) are real sequences such that
\(0 < a \leq\alpha_{n} \),
\(\beta_{n} \leq b < 1\).
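A minimal sketch of scheme (3.1) in the hyperbolic space \((\mathbb{R}^{2}, d, W)\) with \(W(x, y, t) = (1-t)x + ty\). The contraction T (with fixed point \((1,1)\)) and the constants \(\alpha_{n} = \beta_{n} = 1/2\) are illustrative assumptions satisfying \(0 < a \leq \alpha_{n}\), \(\beta_{n} \leq b < 1\).

```python
import math

# S-iteration (3.1) in (R^2, Euclidean d, W) with W(x, y, t) = (1-t)x + ty.
# T is an assumed contraction toward (1, 1), used only for this demo.

def W(x, y, t):
    return tuple((1 - t) * xi + t * yi for xi, yi in zip(x, y))

def T(p):
    return (0.5 * p[0] + 0.5, 0.5 * p[1] + 0.5)   # fixed point (1, 1)

x = (4.0, -3.0)
for _ in range(60):
    y = W(x, T(x), 0.5)      # y_n = W(x_n, T x_n, b_n)
    x = W(T(x), T(y), 0.5)   # x_{n+1} = W(T x_n, T y_n, a_n)

print(math.dist(x, (1.0, 1.0)) < 1e-9)  # True: iterates reach the fixed point
```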
We can easily prove the following lemma from the definition of an \(SKC\)-mapping.
Now, we are in a position to prove the Δ-convergence theorem.
Now, we will introduce the strong convergence theorems in hyperbolic spaces.
Next, we will give one more strong convergence theorem by using Theorem
3.8. We recall the definition of condition (I) introduced by Senter and Dotson [
34].
Let
C be a nonempty subset of a metric space
\((X, d)\). A mapping
\(T : C\to C\) is said to satisfy
condition (I), if there is a nondecreasing function
\(f : [0, \infty) \to[0, \infty)\) with
\(f(0) = 0\),
\(f(t) > 0 \) for all
\(t \in(0, \infty)\) such that
$$d(x, Tx) \geq f\bigl(D\bigl(x, F(T)\bigr)\bigr), $$
for all
\(x \in C\), where
\(D( x, F(T)) = \inf\{d(x, p) : p \in F(T)\}\).
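As a toy instance of condition (I), take \(T(x) = x/2\) on \(\mathbb{R}\), so that \(F(T) = \{0\}\), \(d(x, Tx) = |x|/2\), and \(D(x, F(T)) = |x|\); then \(f(t) = t/2\) witnesses the condition. This choice is ours, for illustration only.

```python
# Checking condition (I) for T(x) = x/2 on R with f(t) = t/2:
# d(x, Tx) = |x|/2 >= f(D(x, F(T))) = |x|/2 for every x.

def f(t):                  # nondecreasing, f(0) = 0, f(t) > 0 for t > 0
    return t / 2

def T(x):
    return x / 2           # F(T) = {0}

for x in [-3.0, -0.5, 0.0, 0.7, 2.0]:
    d_x_Tx = abs(x - T(x))     # = |x| / 2
    D = abs(x - 0.0)           # distance from x to F(T) = {0}
    assert d_x_Tx >= f(D)

print("condition (I) holds at the sample points")
```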
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.