1 Introduction
The Cohen–Grossberg neural network model, proposed by Cohen and Grossberg in 1983 [
1], has attracted much attention because of its wide application in various engineering fields and because it includes, as special cases, other neural networks such as the Hopfield neural network, the cellular neural network, and the recurrent neural network. Hence many scholars have devoted themselves to research in this area (see [
2‐
18]. In practical applications and hardware implementations of artificial neural networks, time delays are inevitable due to the finite switching speed of amplifiers and interneuron conduction distances, and they may even be time-varying and unbounded in some cases, such as in the memory activation function of the human brain neural network model. Therefore, it is natural to introduce unbounded time-varying delays into neural networks, especially Cohen–Grossberg models, and some results have been reported recently, for example, [
19‐
28].
In applications to pattern recognition, the addressable memory patterns are stored as stable equilibrium points. Thus it is necessary that neural networks admit multiple stable equilibrium points. The coexistence of multiple equilibrium points together with their local stability, usually referred to as the multistability of neural network models, has been investigated in depth in recent years (see [
29‐
43] and the references therein). Wang et al. in [
35] studied a class of neural networks with
r-level piecewise linear nondecreasing activation functions and showed that the
n-neuron dynamical system had exact
\((2r+1)^{n}\) equilibrium points, of which
\((r +1)^{n}\) were locally exponentially stable and the others were unstable. By using the partition space method, [
41] proved that neural networks with unbounded time-varying delays could exhibit at least
\(3^{n}\) equilibrium points,
of which \(2^{n}\) are locally
μ-stable and the others are unstable. In [
43], based on the geometrical configuration of activation functions and mathematical tools, some novel algebraic criteria were proposed to guarantee the coexistence of
\(25^{n}\) equilibrium points, in which
\(9^{n}\) equilibrium points are locally
μ-stable, for the memristor-based complex-valued neural networks with non-monotonic piecewise nonlinear activation functions and unbounded time-varying delays. From the references mentioned above, we find that the multistability of Cohen–Grossberg neural networks with unbounded time-varying delays is a challenging problem.
Motivated by the challenging problem, we investigate the multistability of a Cohen–Grossberg neural network with unbounded time-varying delay and nondecreasing activation functions in this paper and prove that the considered model has
\(3^{n}\) equilibrium points, of which
\(2^{n}\) are locally
μ-stable while the remaining ones are unstable. Compared with the literature [
41], the results are more general. The rest of this paper is organized as follows. In Sect.
2, the Cohen–Grossberg model and some preliminaries are given. The main results are presented and proved in Sect.
3. The corollaries and comparison with the results of existing literature are presented in Sect.
4. A numerical example with its simulation is shown in Sect.
5 to illustrate the effectiveness of the proposed results. Finally, conclusions are drawn in Sect.
6.
2 Preliminaries
In this paper, the following Cohen–Grossberg neural network is considered:
$$ \frac{\mathrm{d}x_{i}(t)}{\mathrm{d}t} =-a_{i} \bigl(x_{i}(t) \bigr) \Biggl[b_{i} \bigl(x_{i}(t) \bigr)-\sum_{j=1}^{n}c_{ij}g_{j} \bigl(x_{j}(t) \bigr)-\sum_{j=1}^{n}d_{ij}f_{j} \bigl(x_{j} \bigl(t-\tau (t) \bigr) \bigr)+I_{i} \Biggr], \quad t \ge 0, $$
(1)
where
\(i=1,2,\ldots ,n\),
\(x_{i}(t)\) denotes the state variable associated with the
ith neuron at time
t;
\(a_{i}(x_{i}(t))\) represents an amplification function at time
t;
\(b_{i}(x_{i}(t))\) is an appropriate inhibition behavior function at time
t such that the solutions of model (
1) remain bounded;
\(g_{j}(x_{j}(t))\) and
\(f_{j}(x_{j}(t-\tau (t)))\) denote the activation functions of the
jth neuron unit at time
t without and with time delays, respectively, and
\(C=(c_{ij})_{n\times n}\) and
\(D=(d_{ij})_{n\times n}\) are the corresponding connection weights matrices;
\(\tau (t)\) corresponds to the transmission delay and satisfies
\(\tau (t)\geq 0\);
\(I_{i}\) is the constant external input of the network on the
ith neuron.
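To make the dynamics concrete, model (1) can be integrated numerically. The sketch below is a minimal forward-Euler simulation; all concrete choices are illustrative assumptions, not taken from the paper: a 2-neuron network, \(a_{i}(u)=1+0.5/(1+u^{2})\) (bounded as in (H1)), \(b_{i}(u)=u\) (odd and increasing as in (H2)), tanh activations, an unbounded proportional delay \(\tau (t)=t/2\), and a constant initial function.

```python
import numpy as np

# Hypothetical 2-neuron instance of model (1); none of these numbers are from the paper.
n = 2
C = np.array([[1.2, -0.1], [-0.1, 1.2]])   # connection weights c_ij
D = np.array([[0.3,  0.0], [ 0.0, 0.3]])   # delayed connection weights d_ij
I = np.array([0.0, 0.0])                    # constant external inputs I_i

a = lambda x: 1.0 + 0.5 / (1.0 + x**2)      # amplification a_i, bounded in [1, 1.5]
b = lambda x: x                              # inhibition b_i(u) = u (odd, increasing)
g = np.tanh                                  # delay-free activation g_j
f = np.tanh                                  # delayed activation f_j
tau = lambda t: 0.5 * t                      # unbounded time-varying delay (proportional)

def simulate(x0, T=20.0, h=1e-3):
    """Forward-Euler integration of model (1); the history is stored on the time grid."""
    steps = int(T / h)
    X = np.empty((steps + 1, n))
    X[0] = x0                                # constant initial function phi_i(s) = x0
    for k in range(steps):
        t = k * h
        kd = int(max(t - tau(t), 0.0) / h)   # grid index of the delayed time t - tau(t)
        X[k + 1] = X[k] - h * a(X[k]) * (b(X[k]) - C @ g(X[k]) - D @ f(X[kd]) + I)
    return X
```

With these illustrative parameters a trajectory started in the positive orthant settles near a positive equilibrium, consistent with the multistability discussed below.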
The initial conditions of model (
1) are assumed to be
\(x_{i}(s)= \varphi _{i}(s)\),
\(s\leq 0\),
\(i=1,2,\ldots ,n\), where
\(\varphi _{i}(s)\) is a real-valued continuous function bounded on
\((-\infty , 0]\), except possibly at finitely many points at which the left and right limits exist and the function is right-continuous. Throughout this paper, we make the following assumptions.
(H1)
For each
\(i \in \{1,2,\ldots ,n\}\), the amplification function
\(a_{i}(u)\) is nonnegative continuous and satisfies
$$ 0< \underline{a}_{i}\leq a_{i}(u)\leq \bar{a}_{i}< \infty ,\quad u\in R, i=1,2, \ldots ,n. $$
Define the two
n-dimensional positive diagonal matrices
\(\hat{A}= \operatorname{diag}\{\bar{a}_{1},\bar{a}_{2},\ldots ,\bar{a}_{n}\}\) and
\(\check{A}=\operatorname{diag}\{ \underline{a}_{1}, \underline{a}_{2}, \ldots , \underline{a}_{n}\}\).
(H2)
\(b_{i}(u)\) is an odd, monotonically increasing function, and there exists an
n-dimensional positive diagonal matrix
\(B=\operatorname{diag}\{ {b}_{1},{b}_{2},\ldots ,{b}_{n}\}\) such that
$$ \frac{b_{i}(u)-b_{i}(v)}{u-v}\geq b_{i},\quad u, v\in R, u\neq v, i=1,2, \ldots ,n. $$
(H3)
\(g_{j}(\cdot )\) and
\(f_{j}(\cdot )\) are nondecreasing continuous sigmoid nonlinear functions or nondecreasing piecewise continuous linear functions, and there exist constants
\(p_{j}\leq q_{j}\),
\(m_{j} \leq M_{j}\),
\(m_{j}^{\prime }\leq M_{j}^{\prime }\),
\(m_{j}^{\prime \prime } \leq M_{j}^{\prime \prime }\), so that
$$\begin{aligned}& m_{j}^{\prime }= \lim_{x \rightarrow -\infty } g_{j}(x),\qquad M_{j}^{\prime }= \lim_{x \rightarrow +\infty } g_{j}(x), \\& m_{j}^{\prime \prime }= \lim_{x \rightarrow -\infty } f_{j}(x),\qquad M_{j}^{\prime \prime }= \lim_{x \rightarrow +\infty } f_{j}(x), \\& 0\leq \underline{\sigma }_{j}^{l} \leq \frac{g_{j}(u)-g_{j}(v)}{u-v}\leq \bar{\sigma }_{j}^{l},\qquad 0\leq \underline{\delta }_{j}^{l} \leq \frac{f_{j}(u)-f_{j}(v)}{u-v}\leq \bar{\delta }_{j}^{l},\quad \forall u,v \in (-\infty , p_{j}), \\& 0\leq \underline{\sigma }_{j}^{m} \leq \frac{g_{j}(u)-g_{j}(v)}{u-v}\leq \bar{\sigma }_{j}^{m},\qquad 0\leq \underline{\delta }_{j}^{m} \leq \frac{f_{j}(u)-f_{j}(v)}{u-v}\leq \bar{\delta }_{j}^{m},\quad \forall u,v \in [p_{j},q_{j}], \\& 0\leq \underline{\sigma }_{j}^{r} \leq \frac{g_{j}(u)-g_{j}(v)}{u-v}\leq \bar{\sigma }_{j}^{r},\qquad 0\leq \underline{\delta }_{j}^{r} \leq \frac{f_{j}(u)-f_{j}(v)}{u-v}\leq \bar{\delta }_{j}^{r},\quad \forall u,v \in (q_{j},+\infty ), \end{aligned}$$
where
\(m_{j}=\min \{ m_{j}^{\prime }, m_{j}^{\prime \prime }\}\),
\(M_{j}=\min \{M_{j}^{\prime }, M_{j}^{\prime \prime }\}\),
\(\bar{\sigma }_{j}=\max \{\bar{\sigma }_{j}^{l}, \bar{\sigma }_{j}^{m}, \bar{\sigma }_{j}^{r} \}\),
\(\bar{\delta }_{j}=\max \{{\bar{\delta }}_{j}^{l},{\bar{ \delta }}_{j}^{m},{\bar{\delta }}_{j}^{r} \}\),
\(j=1,2,\ldots ,n\), and define two
n-dimensional positive diagonal matrices
\(\varSigma ^{g}= \operatorname{diag}\{\bar{\sigma }_{1},\bar{\sigma }_{2},\ldots ,\bar{ \sigma }_{n}\}\) and
\(\Delta ^{f}=\operatorname{diag}\{\bar{\delta }_{1},\bar{ \delta }_{2},\ldots , \bar{\delta }_{n}\}\). The superscripts “
l”, “
m”, “
r” denote “left”, “middle”, and “right”, respectively.
It is not hard to find such activation functions, for instance the continuous sigmoid nonlinear function
\(f(x)=\tanh (x)=\frac{e^{x}-e^{-x}}{e^{x}+e^{-x}}\) and the piecewise continuous linear function
\(g(x)=\frac{|x+1|-|x-1|}{2}\). Although these are different functions, their properties can be discussed via common interval separation points. Based on the geometric structure of the activation functions, the one-dimensional real line can be partitioned as follows:
$$ (-\infty ,+\infty )=(-\infty ,p_{i})\cup [p_{i},q_{i}] \cup (q_{i},+ \infty ),\quad i=1,2,\ldots ,n, $$
then the
n-dimensional real number space
\(R^{n}\) can be divided into
\(3^{n}\) pairwise disjoint subregions. For convenience, let
Φ denote the set of these subregions, and so
$$ \varPhi = \Biggl\{ \prod_{i=1}^{n} w_{i}\mid w_{i}=(-\infty ,p_{i}), [p _{i},q_{i}]\mbox{ or }(q_{i},+\infty ) \Biggr\} . $$
For each
\(\prod_{i=1}^{n} w_{i}\in \varPhi \), we define the following index subsets with respect to the different intervals:
\(N_{1}=\{i\mid w_{i}=(-\infty , p_{i}), i=1,2,\ldots ,n\}\),
\(N_{2}=\{i\mid w_{i}=[p_{i}, q_{i}],i=1,2,\ldots ,n\}\),
\(N_{3}=\{i \mid w_{i}=(q_{i}, +\infty ), i=1,2,\ldots ,n\}\).
Furthermore, Φ can be separated into two parts: \(\varPhi _{1}=\{ \prod_{i=1}^{n} w_{i}\mid w_{i}=(-\infty ,p_{i})\mbox{ or }(q_{i}, + \infty )\}\), \(\varPhi _{2}=\varPhi -\varPhi _{1}\). Obviously, \(\varPhi _{1}\) is composed of \(2^{n}\) subregions and \(\varPhi _{2}\) contains \(3^{n}-2^{n}\) subregions.
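The counting behind these cardinalities is purely combinatorial: each coordinate independently picks one of three intervals, so \(|\varPhi |=3^{n}\), and excluding the middle interval leaves \(|\varPhi _{1}|=2^{n}\). A short sketch (labels are symbolic; the values of \(p_{i}\), \(q_{i}\) are irrelevant for counting):

```python
from itertools import product

# Symbolic labels for the three intervals of each coordinate:
# "left" = (-inf, p_i), "middle" = [p_i, q_i], "right" = (q_i, +inf).
labels = ("left", "middle", "right")

def subregions(n):
    """All subregions of Phi as n-tuples of interval labels (3^n of them)."""
    return list(product(labels, repeat=n))

def phi1(n):
    """Subregions of Phi_1: every coordinate avoids the middle interval (2^n of them)."""
    return [w for w in subregions(n) if "middle" not in w]
```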
To facilitate showing the existence of equilibrium points of model (
1), we define new sets
$$\begin{aligned}& \varOmega = \Biggl\{ \prod_{i=1}^{n} v_{i}\mid v_{i}=[-E_{i},p_{i}], [p _{i},q_{i}]\mbox{ or }[q_{i},E_{i}] \Biggr\} , \\& \varOmega _{1}= \Biggl\{ \prod_{i=1}^{n} v _{i}\mid v_{i}=[-E_{i},p_{i}]\mbox{ or } [q_{i},E_{i}] \Biggr\} , \end{aligned}$$
where
\(E_{i}=2 b_{i}^{-1}[\sum_{j=1}^{n}(|c_{ij}|+|d_{ij}|) \max \{m_{j}, M_{j}\}+|I_{i}|+\max \{|b_{i}(p_{i})|, |b_{i}(q_{i})|\}]\),
\(i=1,2,\ldots ,n\).
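The bound \(E_{i}\) is directly computable once the network data are fixed. The sketch below evaluates it for hypothetical 2-neuron data (none of the numbers come from the paper), taking \(b_{i}(u)=u\) so that \(b_{i}^{-1}\) is the identity:

```python
import numpy as np

# Hypothetical data; b_i(u) = u so b_i^{-1} is the identity.
C = np.array([[1.2, -0.1], [-0.1, 1.2]])
D = np.array([[0.3, 0.0], [0.0, 0.3]])
I = np.array([0.0, 0.0])
p = np.array([-1.0, -1.0])   # separation points p_i
q = np.array([ 1.0,  1.0])   # separation points q_i
m = np.array([-1.0, -1.0])   # m_j from (H3)
M = np.array([ 1.0,  1.0])   # M_j from (H3)

b = lambda u: u              # inhibition function

def E(i):
    """E_i = 2 b_i^{-1}[ sum_j (|c_ij|+|d_ij|) max{m_j,M_j} + |I_i| + max{|b_i(p_i)|,|b_i(q_i)|} ]."""
    s = np.sum((np.abs(C[i]) + np.abs(D[i])) * np.maximum(m, M))
    return 2.0 * (s + abs(I[i]) + max(abs(b(p[i])), abs(b(q[i]))))
```

For these numbers each row sums to \((1.2+0.3+0.1)\cdot 1 = 1.6\), giving \(E_{i}=2(1.6+0+1)=5.2\).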
For each \(\prod_{i=1}^{n} v_{i}\in \varOmega \), we can similarly define its index subsets: \(N_{1}^{\prime }=\{i\mid v_{i}=[-E_{i}, p_{i}], i=1,2, \ldots ,n\}\), \(N_{2}^{\prime }=\{i\mid v_{i}=[p_{i}, q_{i}], i=1,2,\ldots ,n \}\), \(N_{3}^{\prime }=\{i\mid v_{i}=[q_{i}, E_{i}], i=1,2,\ldots ,n\}\).
3 Main results
Because an equilibrium point of system (
1) is a constant satisfying the equation
\(b_{i}(x_{i}(t))-\sum_{j=1}^{n}c_{ij}g_{j}(x_{j}(t))-\sum_{j=1}^{n}d_{ij}f_{j}(x_{j}(t))+I_{i}=0\), it is obvious that model (
1) has the same equilibrium points as the following system:
$$ \frac{\mathrm{d}x_{i}(t)}{\mathrm{d}t} =-a_{i} \bigl(x_{i}(t) \bigr) \Biggl[b_{i} \bigl(x_{i}(t) \bigr)-\sum_{j=1}^{n}c_{ij}g_{j} \bigl(x_{j}(t) \bigr)-\sum_{j=1}^{n}d_{ij}f_{j} \bigl(x_{j}(t) \bigr)+I_{i} \Biggr], \quad t \ge 0, $$
(2)
for
\(i=1,2,\ldots ,n\). Therefore, we can investigate the existence of multiple equilibrium points of model (
2) instead of (
1).
For each equilibrium point
\(x^{\star }=(x^{\star }_{1},\ldots ,x^{\star }_{n})\) in
\(\prod_{i=1}^{n} w_{i}\in \varPhi _{1}\), we define its
μ-stability in
\(\prod_{i=1}^{n} w_{i}\) (local
μ-stability in
\(\varPhi _{1}\)), and prove the
μ-stability of all equilibrium points in
\(\varPhi _{1}\) in the following Definition
1 and Theorem
2, respectively.
Next, we show that there exists an unstable equilibrium point in \(\varPhi _{2}\).
6 Conclusion
The stability of multiple unstable Cohen–Grossberg neural networks with unbounded time-varying delays has been discussed analytically in this paper. Based on the geometric structure of two different activation functions and rigorous mathematical analysis, we have proved that the model possesses multiple equilibrium points, some of which are unstable while the others are
μ-stable. A numerical example and its simulation demonstrate the effectiveness of the conclusions. We also point out the following. On the one hand, impulsive control is rarely used to deal with unbounded time-varying delays, in particular for multiple unstable Cohen–Grossberg neural networks with unbounded time-varying delays; hence the stability of such networks under impulsive control remains a challenging problem. On the other hand, we have used a positivity-based method to study the stability of Cohen–Grossberg neural networks in this article. The positivity-based method is a valid approach for difference and delay differential systems (see [
24‐
28,
44‐
48]). Therefore, the research on stability with positivity-based approach is an interesting and meaningful topic, and we will also consider the stability of other neural networks by employing the positivity-based approach in the near future.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.