In this paper, we introduce an algorithm for solving the classical variational inequality problem with a Lipschitz continuous and monotone mapping in Banach spaces. We modify the subgradient extragradient method with a new and simple iterative step size; the strong convergence of the algorithm is established without knowledge of the Lipschitz constant of the mapping. Finally, numerical experiments are presented to show the efficiency and advantage of the proposed algorithm. Our results generalize some of the work in Hilbert spaces to Banach spaces.
1 Introduction
The variational inequality problem (VIP), which was first introduced by Hartman and Stampacchia [1] in 1966, is a very important tool for studying engineering mechanics, physics, economics, optimization theory and applied sciences in a unified and general framework (see [2, 3]). Under appropriate conditions, there are two general approaches to solving the variational inequality problem: one is the regularization method and the other is the projection method. Many projection-type algorithms for solving the variational inequality problem have been proposed and analyzed by many authors [4–22]. The gradient method is the simplest algorithm, in which only one projection onto the feasible set is performed, but the convergence of the method requires strong monotonicity. To avoid the hypothesis of strong monotonicity, Korpelevich [4] proposed an algorithm for solving variational inequalities in Euclidean space, which is called the extragradient-type method. The subgradient extragradient-type algorithm was introduced by Censor et al. in [5] for solving variational inequalities in real Hilbert spaces. Yao et al. in [6] proposed an iterative algorithm for finding a common solution of pseudomonotone variational inequalities and a fixed point of pseudocontractive operators in Hilbert spaces.
In the past, most variational inequality problems were studied in Euclidean or Hilbert spaces; recently, extragradient-type methods have been extended from Hilbert spaces to Banach spaces (see [23–27]). In [23], the authors used the subgradient extragradient method and the Halpern method to propose an algorithm for solving variational inequalities in Banach spaces. In [24], the authors proposed a splitting algorithm for finding a common zero of a finite family of inclusion problems of accretive operators in Banach spaces. Inspired by the work mentioned above, in this paper we extend the subgradient extragradient algorithm proposed in [8] for solving variational inequalities from Hilbert spaces to Banach spaces. It is worth stressing that our algorithm has a simple structure and that its convergence does not require knowledge of the Lipschitz constant of the mapping. The paper is organized as follows. In Sect. 2, we present some preliminaries that will be needed in the sequel. In Sect. 3, we propose an algorithm and analyze its convergence. Finally, in Sect. 4 we present numerical examples and comparisons.
2 Mathematical preliminaries
In this section we introduce some definitions and basic results that will be used in our paper. Assume that X is a real Banach space with dual \(X^{\ast }\); \(\| \cdot \| \) and \(\| \cdot \| _{\ast }\) denote the norms of X and \(X^{\ast }\), respectively, \(\langle x, x^{\ast } \rangle \) the duality pairing in \(X\times X^{\ast }\) for all \(x^{\ast } \in X^{\ast }\) and \(x\in X\), \(x_{n}\longrightarrow x\) the strong convergence of a sequence \(\{ x_{n}\}\) of X to \(x\in X\), and \(x_{n}\rightharpoonup x \) the weak convergence of a sequence \(\{ x_{n}\}\) of X to \(x\in X\). Let \(S_{X}\) denote the unit sphere of X and \(B_{X}\) the closed unit ball of X. Let C be a nonempty closed convex subset of X, with closure denoted by C̄, and let \(F: C\longrightarrow X^{\ast }\) be a continuous mapping. Consider the variational inequality problem (for short, \(\operatorname{VI}(F, C)\)), which consists in finding a point \(x \in C\) such that
$$ \bigl\langle F(x),y-x\bigr\rangle \geq 0, \quad \forall y\in C. $$
(1)
Let S be the solution set of (1). Finding an element of S is a fundamental problem in optimization theory. It is well known that x is a solution of the \(\operatorname{VI}(F,C)\) if and only if x is a solution of the fixed-point equation \(x=P_{C}(x-\lambda F(x))\), where λ is an arbitrary positive constant. Therefore, fixed-point algorithms can be used to solve \(\operatorname{VI}(F, C)\).
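To make the fixed-point characterization concrete, here is a minimal Python sketch in the Euclidean setting (where \(P_{C}\) is the ordinary metric projection): it iterates \(x\leftarrow P_{C}(x-\lambda F(x))\) for a hypothetical strongly monotone affine map and a ball constraint; the map, the radius and the step size are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Projection onto the closed ball {x : ||x|| <= r} (an illustrative feasible set C).
def proj_ball(x, r=1.0):
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

# A hypothetical monotone, Lipschitz-continuous affine map F(x) = Ax + b.
A = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric positive definite => strongly monotone
b = np.array([-1.0, 0.5])
F = lambda x: A @ x + b

lam = 0.1                                 # fixed positive step size
x = np.zeros(2)
for _ in range(500):
    x = proj_ball(x - lam * F(x))         # fixed-point iteration x = P_C(x - lam F(x))

# At a solution the fixed-point residual ||x - P_C(x - lam F(x))|| vanishes.
print(x, np.linalg.norm(x - proj_ball(x - lam * F(x))))
```

As noted above, this plain projected iteration relies on strong monotonicity; for merely monotone maps the extragradient-type corrections recalled below are needed.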
We next recall some properties of the Banach space. Let X be a real Banach space and \(X^{*}\) be the corresponding dual space.
Definition 1
Assume that \(C\subseteq X\) is a nonempty set and \(F: C\longrightarrow X ^{\ast }\) is a continuous mapping. Then:
\((A1)\)
The mapping F is called monotone on C iff
$$ \bigl\langle F(x)-F(y),x-y\bigr\rangle \geq 0, \quad \forall x, y\in C. $$
(2)
\((A2)\)
The mapping F is Lipschitz-continuous with constant \(L>0\), i.e., there exists \(L>0\) such that
$$ \bigl\Vert F(x)-F(y) \bigr\Vert \leq L \Vert x-y \Vert , \quad \forall x, y\in C. $$
(3)
\((A3)\)
([28]) The mapping F is called hemicontinuous of C into \(X^{*}\) iff for any \(x,y \in C\) and \(z \in X\) the function \(t\mapsto \langle z,F(tx+(1-t)y)\rangle \) of \([0, 1]\) into \(\mathcal{R}\) is continuous.
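As a quick sanity check of (A1) and (A2), the Python snippet below numerically verifies monotonicity and 1-Lipschitz continuity for the componentwise map F(x) = max(0, x), a finite-dimensional analogue of the operator used in Example 4.1 below; the dimension, sample count and random seed are arbitrary choices.

```python
import numpy as np

# Numerical check of (A1)-(A2) for F(x) = max(0, x) applied componentwise; here L = 1.
rng = np.random.default_rng(0)
F = lambda x: np.maximum(0.0, x)
ok_monotone = ok_lipschitz = True
for _ in range(1000):
    x, y = rng.normal(size=50), rng.normal(size=50)
    ok_monotone &= np.dot(F(x) - F(y), x - y) >= 0.0                      # (A1)
    ok_lipschitz &= np.linalg.norm(F(x) - F(y)) <= np.linalg.norm(x - y)  # (A2) with L = 1
print(ok_monotone, ok_lipschitz)    # both True
```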
The normalized duality mapping \(J_{X}\) (usually written J) of X into \(2^{X^{*}}\) is defined by
$$ J(x)=\bigl\{ x^{*}\in X^{*}: \bigl\langle x,x^{*}\bigr\rangle = \Vert x \Vert ^{2}= \bigl\Vert x^{*} \bigr\Vert _{\ast }^{2}\bigr\} $$
for all \(x\in X\). Let \(q \in (0,2]\). The generalized duality mapping \(J_{q}:X \rightarrow 2^{X^{*}}\) is defined (for the definitions and properties, see [24]) by
$$ J_{q}(x)=\bigl\{ x^{*}\in X^{*}: \bigl\langle x,x^{*}\bigr\rangle = \Vert x \Vert ^{q}, \bigl\Vert x^{*} \bigr\Vert _{\ast }= \Vert x \Vert ^{q-1}\bigr\} $$
for all \(x\in X\). The norm of X is said to be Gâteaux differentiable if, for each \(x, y\in S_{X}\), the limit
$$ \lim_{t\rightarrow 0}\frac{ \Vert x+ty \Vert - \Vert x \Vert }{t} $$
(4)
exists. In this case, the space X is also called smooth. We know that X is smooth iff J is a single-valued mapping of X into \(X^{*}\), X is reflexive iff J is surjective, and X is strictly convex iff J is one-to-one. Therefore, if X is a smooth, strictly convex and reflexive Banach space, then J is a single-valued bijection, and the inverse mapping \(J^{-1}\) coincides with the duality mapping \(J^{*}\) on \(X^{*}\). More details can be found in [29–31]. If (4) converges uniformly in \(x, y\in S_{X}\), then X is said to be uniformly smooth. The space X is said to be strictly convex if \(\| \frac{x+ y}{2}\| <1\) whenever \(x, y\in S_{X}\) and \(x\neq y\). The modulus \(\delta _{X}\) of convexity is defined by
$$ \delta _{X}(\varepsilon )=\inf \biggl\{ 1- \biggl\Vert \frac{x+y}{2} \biggr\Vert : x,y\in B_{X}, \Vert x-y \Vert \geq \varepsilon \biggr\} $$
for all \(\varepsilon \in [0,2]\). A Banach space X is said to be uniformly convex if \(\delta _{X}(\varepsilon )> 0\) for every \(\varepsilon \in (0,2]\). It is well known that a Banach space X is uniformly convex if and only if, for any two sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) in X with \(\lim_{n\rightarrow \infty } \Vert x_{n} \Vert =\lim_{n\rightarrow \infty } \Vert y_{n} \Vert =1\) and \(\lim_{n\rightarrow \infty } \Vert x_{n}+y_{n} \Vert =2\), we have \(\lim_{n\rightarrow \infty } \Vert x_{n}-y_{n} \Vert =0\). A uniformly convex Banach space is strictly convex and reflexive. By [24] we know that a Banach space X is smooth if and only if the duality mapping \(J_{q}\) is single valued, and is uniformly smooth if and only if the duality mapping \(J_{q}\) is single valued and norm-to-norm uniformly continuous on bounded sets of X. Moreover, if there exists \(c>0\) such that, for all \(\varepsilon \in [0,2]\), \(\delta _{X}(\varepsilon )\geq c\varepsilon ^{2}\), then X is said to be 2-uniformly convex. It is obvious that every 2-uniformly convex Banach space is uniformly convex, and all Hilbert spaces are uniformly smooth and 2-uniformly convex, and therefore reflexive.
Now, we recall some useful definitions and results. Firstly, let us introduce the projection operators on X. Let \(C\subseteq X\) be a nonempty closed convex subset of a real uniformly convex Banach space X. Then we know that, for any \(z\in X\), there exists a unique element \(\tilde{z} \in C\) such that \(\|z-\tilde{z}\|\leq \|z-y\|\) for all \(y\in C\). Putting \(\tilde{z}=P_{C}z\), the operator \(P_{C}: X \longrightarrow C\) is called the metric projection operator of X onto C; the generalized projection operator, acting from \(X^{*}\) onto C, is recalled below.
To avoid the hypothesis of strong monotonicity, Korpelevich [4] gave the extragradient-type method
$$ \textstyle\begin{cases} y_{n}=P_{C}(x_{n}-\lambda F(x_{n})), \\ x_{n+1}=P_{C}(x_{n}-\lambda F(y_{n})). \end{cases} $$
(6)
The subgradient extragradient-type algorithm extends (6) by replacing the second orthogonal projection onto C with a projection onto some constructible set; it was introduced for solving \(\operatorname{VI}(F, C)\) in real Hilbert spaces. The method is of the following form:
$$ \textstyle\begin{cases} y_{n}=P_{C}(x_{n}-\lambda F(x_{n})), \\ x_{n+1}=P_{T_{n}}(x_{n}-\lambda F(y_{n})), \end{cases} $$
(7)
where \(T_{n}=\{x\in H|\langle x_{n}-\lambda F(x_{n})-y_{n},x-y_{n} \rangle \leq 0\}\) and \(\lambda \in (0,\frac{1}{L})\). Cai et al. [23] suggested the following method:
$$ \textstyle\begin{cases} y_{n}=P_{C}(Jx_{n}-\lambda _{n} F(x_{n})), \\ z_{n}=P_{T_{n}}(Jx_{n}-\lambda _{n} F(y_{n})),\quad T_{n}=\{x\in X|\langle Jx_{n}-\lambda _{n} F(x_{n})-Jy_{n},x-y_{n}\rangle \leq 0\}, \\ x_{n+1}=J^{-1}(\alpha _{n} Jx_{0}+(1-\alpha _{n})Jz_{n}), \end{cases} $$
(8)
where J is the normalized duality mapping of X into \(X^{*}\), \(\lambda _{n} \in (0,\frac{1}{L})\), \({\alpha _{n}}\subset (0,1)\), \(\alpha _{n}\rightarrow 0\) and \(\sum_{n=1}^{\infty }\alpha _{n}=+\infty \). They proved that the sequence \(\{x_{n}\}\) generated by (8) converges strongly to \(P_{S}Jx_{0}\).
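For orientation, the following Python fragment is a minimal Euclidean sketch of the fixed-step extragradient iteration (6); the antisymmetric operator used as F is a standard monotone (but not strongly monotone) test map and, like the ball constraint, is an illustrative assumption.

```python
import numpy as np

# F(x) = Ax with A antisymmetric: monotone (since <Ax, x> = 0) and Lipschitz with L = ||A||.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = lambda x: A @ x
L = np.linalg.norm(A, 2)

def proj_ball(x, r=2.0):                  # illustrative feasible set C = {x : ||x|| <= 2}
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

lam = 0.5 / L                             # any lambda in (0, 1/L)
x = np.array([1.0, 1.0])
for _ in range(200):
    y = proj_ball(x - lam * F(x))         # extrapolation step
    x = proj_ball(x - lam * F(y))         # correction step uses F evaluated at y
print(x)                                  # approaches the solution x* = 0
```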
The main drawback of algorithms (7) and (8) is the requirement to know the Lipschitz constant, or at least some estimate of it. Shehu and Iyiola [7] proposed the following subgradient extragradient method:
$$ \textstyle\begin{cases} \text{Given } \rho \in (0, 1), \mu \in (0, 1), \\ y_{n}=P_{C}(x_{n}-\lambda _{n} F(x_{n})), \text{where } \lambda _{n}= \rho ^{l_{n}} \text{ and } l_{n} \text{ is the smallest nonnegative integer } l \\ \text{such that } \lambda _{n} \Vert F(x_{n})-F(y_{n}) \Vert \leq \mu \Vert x_{n}-y_{n} \Vert , \\ z_{n}=P_{T_{n}}(x_{n}-\lambda _{n} F(y_{n})), \text{where } T_{n}=\{x\in H|\langle x_{n}-\lambda _{n} F(x_{n})-y _{n},x-y_{n} \rangle \leq 0\}, \\ x_{n+1}=\alpha _{n}f(x_{n})+(1-\alpha _{n})z_{n}, \text{where } f:H \rightarrow H \text{ is a contraction mapping}. \end{cases} $$
(9)
Algorithm (9) does not require knowledge of the Lipschitz constant, but the method may involve the computation of additional projections.
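To make the line search in (9) concrete, here is a small Python sketch of the backtracking rule alone, written for the Euclidean setting with F and the projection onto C supplied by the caller; the cap on the number of backtracks is an added safeguard, not part of (9).

```python
import numpy as np

def armijo_stepsize(x, F, proj_C, rho=0.5, mu=0.9, max_backtracks=50):
    """Return (lambda_n, y_n) with lambda_n = rho**l for the smallest nonnegative
    integer l such that lambda_n * ||F(x) - F(y)|| <= mu * ||x - y||,
    where y = P_C(x - lambda_n * F(x))."""
    lam = 1.0                                 # rho**0
    for _ in range(max_backtracks):
        y = proj_C(x - lam * F(x))            # each trial requires one projection onto C
        if lam * np.linalg.norm(F(x) - F(y)) <= mu * np.linalg.norm(x - y):
            return lam, y
        lam *= rho                            # shrink and try again
    return lam, y
```

Every rejected trial costs one more projection onto C, which is exactly the extra work mentioned above and which the step size of Algorithm A avoids.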
In [32], Alber introduced a functional \(V(x^{*},y): X^{*} \times X \longrightarrow R\) by
$$ V\bigl(x^{*},y\bigr)= \bigl\Vert x^{*} \bigr\Vert _{\ast }^{2}-2\bigl\langle x^{*},y\bigr\rangle + \Vert y \Vert ^{2}, \quad \forall x^{*}\in X^{*}, y\in X. $$
(10)
The operator \(P_{C}: X^{*}\longrightarrow C\subseteq X\) is said to be the generalized projection operator if it associates to an arbitrary fixed point \(x^{*}\in X^{*}\) the solution of the minimization problem
$$ V\bigl(x^{*},\tilde{x^{*}}\bigr)=\inf_{y\in C}V\bigl(x^{*},y\bigr), $$
where \(\tilde{x^{*}}=P_{C}x^{*}\in C \subset X\) is called a generalized projection of the point \(x^{*}\). For more results about \(P_{C}\), see [32]. The next lemma describes the properties of \(P_{C}\).
Lemma 1
Let C be a nonempty closed convex set in X and \(x^{*}, y^{*} \in X^{*}\), \(\tilde{x^{*}}=P_{C}x^{*}\). Then
$$\begin{aligned} \mathrm{(i)}& \quad \bigl\langle J\tilde{x^{*}}-x^{*}, y- \tilde{x^{*}}\bigr\rangle \geq 0, \quad \forall y\in C. \\ \mathrm{(ii)}&\quad V\bigl(J\tilde{x^{*}},y\bigr)\leq V \bigl(x^{*},y\bigr)-V\bigl(x^{*},\tilde{x^{*}} \bigr), \quad \forall y\in C. \\ \mathrm{(iii)}&\quad V\bigl(x^{*},z\bigr)+2\bigl\langle J^{-1}x^{*}-z, y^{*}\bigr\rangle \leq V \bigl(x^{*}+ y^{*},z\bigr),\quad \forall z\in X. \end{aligned}$$
In [32], Alber also introduced the Lyapunov functional \(\varphi : X \times X\longrightarrow R\) by
$$ \varphi (x,y)= \Vert x \Vert ^{2}-2 \langle Jx,y \rangle + \Vert y \Vert ^{2}, \quad \forall x,y\in X. $$
Then, combining (10), we obtain \(V(x^{*},y)=\varphi (J^{-1}x^{*},y)\) for all \(x^{*}\in X^{*}\), \(y\in X\).
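For intuition (a remark that is immediate from the definitions, though not stated in the text): when X is a Hilbert space the duality mapping J is the identity, so
$$ \varphi (x,y)= \Vert x \Vert ^{2}-2\langle x,y\rangle + \Vert y \Vert ^{2}= \Vert x-y \Vert ^{2} \quad \text{and}\quad V\bigl(x^{*},y\bigr)= \bigl\Vert x^{*}-y \bigr\Vert ^{2}, $$
and the generalized projection \(P_{C}\) reduces to the usual metric projection; this is the sense in which the Banach-space results below generalize the Hilbert-space theory.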
Moreover, we have the following lemma (see [33]).
Lemma 3
Let \(\{a_{n}\}\) be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence \(\{a_{n_{j}}\}\) of \(\{a_{n}\}\) which satisfies \(a_{n_{j}}< a_{{n_{j}}+1} \) for all \(j\in \mathcal{N}\). Define the sequence \(\{\tau (n)\}_{n \geq n_{0}}\) of integers as follows:
$$ \tau (n)=\max \{k\leq n: a_{k}< a_{k+1}\}, $$
where \(n_{0} \in \mathcal{N}\) is such that \(\{k\leq n_{0}: a_{k}< a_{k+1} \}\) is nonempty. Then the following hold: \(\tau (n)\) is nondecreasing, \(\tau (n)\rightarrow \infty \) as \(n\rightarrow \infty \), and, for all \(n\geq n_{0}\), \(a_{\tau (n)}\leq a_{\tau (n)+1}\) and \(a_{n}\leq a_{\tau (n)+1}\).
Lemma 6
Let C be a nonempty convex subset of a topological vector space X and \(F:C\rightarrow X^{*}\) be a monotone and hemicontinuous mapping. Then \(x^{*}\in C\) is a solution of (1) if and only if
$$ \bigl\langle F(y),y-x^{*}\bigr\rangle \geq 0, \quad \forall y\in C. $$
(11)
3 Main results
In this section, we introduce a new iterative algorithm for solving monotone variational inequality problems in Banach spaces. In order to present the method and establish its convergence, we make the following assumption.
Assumption 1
(a)
The feasible set C is a nonempty closed convex subset of a real 2-uniformly convex Banach space X.
(b)
\(F:X\rightarrow X^{\ast }\) is monotone on C and L-Lipschitz continuous on X.
(c)
The solution set S of \(\operatorname{VI}(F,C)\) is nonempty.
Now, we discuss the strong convergence of the following algorithm for solving the monotone variational inequality. Our algorithm is of the following form.
Algorithm A
(Step 0)
Take \(\lambda _{0}>0\) and \(\mu \in (0, 1)\), and let \(x_{0}\in X\) be a given starting point.
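For readers who want a concrete picture of this family of methods, the following Python fragment is only a schematic Euclidean sketch (J equal to the identity, generalized projections replaced by metric projections) of a Halpern-type subgradient extragradient iteration with a self-adaptive step size in the spirit of Algorithm A; the particular update rule for \(\lambda _{n+1}\) shown in the code is an assumed rule of the usual form and should not be read as the exact rule of Algorithm A.

```python
import numpy as np

def subgrad_extragradient_sketch(x0, F, proj_C, lam0=0.7, mu=0.9, n_iter=1000):
    """Schematic Euclidean sketch (J = identity) of a Halpern-type subgradient
    extragradient iteration with a self-adaptive step size.  The lambda update
    below is an assumed rule of the usual form, not necessarily that of Algorithm A."""
    x0 = np.asarray(x0, dtype=float)
    x, lam = x0.copy(), lam0
    for n in range(n_iter):
        alpha = 1.0 / (100 * (n + 2))                 # alpha_n as used in the experiments
        y = proj_C(x - lam * F(x))                    # projection onto the feasible set C
        # Half-space T_n = {u : <x - lam*F(x) - y, u - y> <= 0}; project x - lam*F(y) onto it.
        w = x - lam * F(y)
        v = x - lam * F(x) - y
        t = max(0.0, float(np.dot(v, w - y))) / (float(np.dot(v, v)) + 1e-16)
        z = w - t * v
        # Assumed self-adaptive step size: never needs the Lipschitz constant.
        denom = float(np.dot(F(x) - F(y), z - y))
        if denom > 0.0:
            lam = min(lam, mu * (np.dot(x - y, x - y) + np.dot(z - y, z - y)) / (2.0 * denom))
        x = alpha * x0 + (1.0 - alpha) * z            # Halpern anchor at the starting point
    return x
```

In the Banach setting of the paper, the metric projections and the identity above are replaced by the generalized projection \(P_{C}\) and the duality mappings J and \(J^{-1}\).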
We prove the strong convergence theorem for Algorithm A. Firstly, we give the following theorem, which plays a crucial role in the proof of the main theorem.
Theorem 1
Assume that Assumption 1 holds and let \(x_{n}\), \(y_{n}\), \(\lambda _{n}\) be the sequences generated by Algorithm A. Then we have the following results:
(1) If \(x_{n}=y_{n}\) for some n, then \(x_{n}\in S\).
(2) The sequence \(\{\lambda _{n}\} \) is a monotonically decreasing sequence with lower bound \(\min \{{\frac{\mu }{L },\lambda _{0} }\}\); therefore, the limit of \(\{\lambda _{n}\} \) exists and is denoted \(\lambda =\lim_{n\rightarrow \infty }\lambda _{n}\). It is obvious that \(\lambda >0\).
Proof
(1) If \(x_{n}=y_{n}\), then \(x_{n}=P_{C}(Jx_{n}- \lambda _{n} F(x_{n}))\), so \(x_{n} \in C\). By the characterization of the generalized projection \(P_{C}\) onto C, we have
$$ \bigl\langle Jx_{n}-\lambda _{n} F(x_{n})-Jx_{n},x_{n}-x \bigr\rangle \geq 0 \quad \forall x \in C. $$
(2) It is obvious that \(\{\lambda _{n}\} \) is a monotonically decreasing sequence. Since F is a Lipschitz-continuous mapping with constant \(L>0\), in the case of \(\langle F(x_{n})-F(y_{n}),x_{n+1}-y_{n} \rangle >0\), we have
Clearly, the sequence \(\{\lambda _{n}\} \) has the lower bound \(\min \{{\frac{\mu }{L },\lambda _{0} }\}\).
Since \(\{\lambda _{n}\} \) is monotonically decreasing sequence and has the lower bound, the limit of \(\{\lambda _{n}\} \) exists, and we denote \(\lambda =\lim_{n\rightarrow \infty }\lambda _{n}\). Clearly, \(\lambda >0\). □
The following lemma plays a crucial role in the proof of Theorem 2.
Lemma 7
Assume that Assumption 1 holds. Let \(\{x_{n}\}\) be a sequence generated by Algorithm A and \(\{\alpha _{n}\}\subset (0, 1)\). Then the sequence \(\{x_{n}\}\) is bounded.
By Theorem 1(2), we get \(\lim_{n\rightarrow \infty }\lambda _{n}\frac{\mu }{\lambda _{n+1}}=\mu (0<\mu <1)\), which means that there exists an integer \(N_{0}>0\) such that, for every \(n>N_{0}\), we have \(0<\lambda _{n}\frac{\mu }{\lambda _{n+1}}<1\). Using this in (19), we obtain, for every \(n>N_{0}\),
Hence, \(\{V(Jx_{n}, u) \}\) is bounded. Since \(V(Jx_{n}, u)\geq \frac{1}{ \mu }\|x_{n}-u \|^{2}\), we see that \(\{x_{n}\}\) is bounded. □
Theorem 2
Assume that Assumption 1 holds and the sequence \(\{\alpha _{n}\}\) satisfies \(\{\alpha _{n}\}\subset (0, 1)\), \(\sum_{n=0}^{\infty }\alpha _{n}= \infty \) and \(\lim_{n\rightarrow \infty }\alpha _{n}=0 \). Let \(\{x_{n}\}\) be a sequence generated by Algorithm A. Then \(\{x_{n}\}\) strongly converges to a solution \(x^{*}=P_{S}Jx_{0}\).
$$\begin{aligned} \bigl\langle Jx_{0}-Jx^{*},z-x^{*}\bigr\rangle \leq 0, \quad \forall z\in S. \end{aligned}$$
By the proof of Theorem 1, there exists \(N_{0}\geq 0\) such that, for all \(n\geq N_{0}\), \(V(Jz_{n}, x^{*})\leq V(Jx_{n}, x^{*})\).
From Lemma 7, we see that the sequence \(\{x_{n}\}\) is bounded; consequently, \(\{y_{n}\}\) and \(\{z_{n}\}\) are bounded. Moreover, by (19), we see that there exists \(N_{0}\geq 0\) such that, for every \(n\geq N_{0}\),
Case 1. As in Lemma 5, set \(a_{n}=\varphi (x_{n}, x^{*})\) and suppose that there exists \(N_{1}\in \mathcal{N}\) (\(N_{1} \geq N_{0}\)) such that the sequence \(\{\varphi (x_{n}, x^{*})\}^{ \infty }_{n=N_{1}}\) is nonincreasing. Then \(\{a_{n}\}^{\infty }_{n=1}\) converges; using this in (20), we obtain, when \(n> N_{1}\geq N_{0}\),
Since \(J^{-1}\) is norm-to-norm uniformly continuous on bounded subsets of \(X^{*}\), we have \(\|x_{n+1}-z_{n}\|\longrightarrow 0\). Therefore, we get
By Lemma 7, we know that \(\{x_{n}\}\) is bounded; hence there exists a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) converging weakly to some \(z_{0} \in X\), i.e., \(x_{n_{k}}\rightharpoonup z_{0}\), and
Then \(y_{n_{k}}\rightharpoonup z_{0}\) and \(z_{0}\in C\). Since F is monotone and \(y_{n_{k}}=P_{C}(x_{n_{k}}-\lambda _{n_{k}} F(x_{n_{k}}))\), by Lemma 1(i), we have \(\langle Jx_{n_{k}}-\lambda _{n_{k}}F(x_{n _{k}})-Jy_{n_{k}}, z-y_{n_{k}}\rangle \leq 0\), \(\forall z\in C \). That is, for all \(z\in C\),
Letting \(k\rightarrow \infty \) and using the facts that \(\lim_{k\rightarrow \infty }\|y_{n_{k}}-x_{n_{k}}\|=0\), \(\{y_{n_{k}} \}\) is bounded and \(\lim_{k\rightarrow \infty }\lambda _{n_{k}}= \lambda >0\), we obtain \(\langle F(z), z-z_{0}\rangle \geq 0\), \(\forall z\in C\). By Lemma 6, we have \(z_{0}\in S\).
It follows from Lemma 5 and Lemma 4 that \(\lim_{n\rightarrow \infty }\varphi (x_{n}, x^{*})=0\), which means
$$ \lim_{n\rightarrow \infty }x_{n}=x^{*}. $$
Case 2. Suppose that there exists a subsequence \(\{x_{m_{j}} \}\) of \(\{x_{n}\}\) such that \(\varphi (x_{m_{j}},x^{*})<\varphi (x _{m_{j}+1},x^{*})\) for all \(j\in \mathcal{N}\). From Lemma 3, there exists a nondecreasing sequence \(\{m_{k}\}\subset \mathcal{N}\) such that \(\lim_{k\rightarrow \infty }m_{k}=\infty \) and the following inequalities hold for all \(k\in \mathcal{N}\):
Since \(\{x_{m_{k}}\}\) is bounded, there exists a subsequence of \(\{x_{m_{k}}\}\) which converges weakly to some \(z_{0} \in X\). Using the same argument as in the proof of Case 1, and combining (25) with \(\lim_{k\rightarrow \infty }(1-\lambda _{m _{k}}\frac{\mu }{\lambda _{{m_{k}}+1}}) =1-\mu >0\), we obtain
we obtain \(\limsup_{k\rightarrow \infty }\varphi (x_{m_{k}}, x ^{*})=0\), which means \(\lim_{k\rightarrow \infty }\|x_{m_{k}}-x ^{*}\|^{2}=0\). Since \(\|x_{k}-x^{*}\|\leq \|x_{{m_{k}}+1}-x^{*}\|\), we have \(\lim_{k\rightarrow \infty }\|x_{k}-x^{*}\|=0\). Therefore \(x_{k}\rightarrow x^{*}\). This concludes the proof. □
4 Numerical experiments
In this section, we present two numerical experiments related to variational inequalities.
Example 4.1
We compare the proposed algorithm with the Algorithm 3.5 in [23]. For Algorithm A and Algorithm 3.5 in [23], we take \(\alpha _{n}=\frac{1}{100(n+2)}\). To terminate the algorithms, we use the condition \(\|y_{n}-x_{n}\|\leq \varepsilon \) and \(\varepsilon =10^{-3}\) for all the algorithms.
Let \(H=L^{2}([0,2\pi ]) \) with norm \(\|x\|=(\int _{0}^{2\pi }|x(t)|^{2}\,dt)^{ \frac{1}{2}} \) and inner product \(\langle x,y\rangle = \int _{0}^{2 \pi }x(t)y(t)\,dt\), \(x, y\in H\). The operator \(F:H\rightarrow H\) is defined by \(Fx(t)=\max (0,x(t))\), \(t\in [0,2\pi ]\), for all \(x\in H\). It can easily be verified that F is Lipschitz-continuous and monotone. The feasible set is \(C=\{x\in H: \int _{0}^{2\pi }(t^{2}+1)x(t)\,dt \leq 1 \}\). Observe that \(0\in S\) and so \(S\neq \emptyset \). We take \(\lambda _{0}=0.7 \) and \(\mu =0.9\) for Algorithm A. For Algorithm 3.5 in [23], we take \(\lambda =0.7 \). The numerical results are shown in Table 1; a discretized sketch of this setup is given after the table.
Table 1
Comparison between Algorithm A and Algorithm 3.5 in [23]
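A discretized Python sketch of this experiment is given below; for simplicity it runs the plain extragradient iteration (6) with a fixed λ rather than Algorithm A or Algorithm 3.5, and the grid size, the starting point and the closed-form projection onto the half-space C are implementation assumptions.

```python
import numpy as np

m = 400                                          # grid points discretizing [0, 2*pi]
t = np.linspace(0.0, 2.0 * np.pi, m)
h = t[1] - t[0]                                  # quadrature weight for the L2 inner product
inner = lambda u, v: h * float(np.dot(u, v))     # <u, v> = int_0^{2pi} u(t) v(t) dt (approx.)

F = lambda x: np.maximum(0.0, x)                 # (Fx)(t) = max(0, x(t)), monotone, L = 1
a = t ** 2 + 1.0                                 # C = {x : <a, x> <= 1} is a half-space

def proj_C(x):                                   # closed-form projection onto the half-space
    s = inner(a, x)
    return x if s <= 1.0 else x - ((s - 1.0) / inner(a, a)) * a

x = np.sin(t)                                    # an illustrative starting point
lam = 0.7                                        # any lam in (0, 1/L) = (0, 1)
for n in range(1000):
    y = proj_C(x - lam * F(x))
    if np.sqrt(inner(x - y, x - y)) <= 1e-3:     # stopping rule ||x_n - y_n|| <= 1e-3
        break
    x = proj_C(x - lam * F(y))                   # plain extragradient step, for illustration

print(n, np.sqrt(inner(F(x), F(x))))             # near a solution F(x) = max(0, x) vanishes
```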
Example 4.2
This example is classical. The feasible set is \(C=R^{m}\) and \(F(x)=Ax\), where A is a square \(m\times m\) matrix given by the condition
$$ a_{i,j}= \textstyle\begin{cases} -1 , & \text{if } j=m+1-i \text{ and } j>i, \\ 1, & \text{if } j=m+1-i \text{ and } j< i, \\ 0, & \text{otherwise}. \end{cases} $$
This is a classical example of a problem on which the usual gradient method does not converge. For even m, the zero vector is the solution of this example. We take \(\lambda _{0}=0.7\) (\(\lambda _{0}=0.9\)) and \(\mu =0.9\) for Algorithm A. For Algorithm 3.5 in [17], we take \(\lambda =0.7 \) and \(L=1\). For all tests, we take \(x_{0}=(1,\ldots ,1)\). The numerical results are shown in Table 2; a construction of the matrix A is sketched after the table.
Table 2
Comparison between Algorithm A and Algorithm 3.5 in [17]
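For reference, the matrix A of this example can be generated as follows; the dimension m = 10 and the checks at the end are illustrative.

```python
import numpy as np

def make_A(m):
    """Matrix from the example: a[i,j] = -1 if j = m+1-i and j > i,
    +1 if j = m+1-i and j < i, 0 otherwise (1-based indices)."""
    A = np.zeros((m, m))
    for i in range(1, m + 1):
        j = m + 1 - i
        if j > i:
            A[i - 1, j - 1] = -1.0
        elif j < i:
            A[i - 1, j - 1] = 1.0
    return A

m = 10                                   # an illustrative even dimension
A = make_A(m)
F = lambda x: A @ x

# A is antisymmetric, so <F(x) - F(y), x - y> = 0: F is monotone but not strongly monotone,
# and the plain gradient step x <- x - lam*F(x) never shrinks ||x||, so it cannot converge.
print(np.allclose(A.T, -A), np.linalg.norm(A, 2))   # True, Lipschitz constant L = 1
```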
Tables 1 and 2 illustrate that the proposed algorithm, which does not use the Lipschitz constant, behaves comparably to the methods that require knowledge of it.
5 Conclusions
In this paper, we have established a strong convergence result for the monotone variational inequality problem with a Lipschitz continuous and monotone mapping in 2-uniformly convex Banach spaces. Our algorithm is based on the subgradient extragradient method with a new step size; its convergence is established without knowledge of the Lipschitz constant of the mapping. Our results extend the results of Yang and Liu in [8] from Hilbert spaces to 2-uniformly convex and uniformly smooth Banach spaces and show strong convergence. Finally, numerical experiments demonstrate the validity and advantage of the proposed method.
Acknowledgements
The authors would like to express their sincere thanks to the editors.
Availability of data and materials
Not applicable for this section.
Competing interests
The author declares that there are no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.