Erratum to: Nonlinear Dyn (2013) 71:353–359 DOI 10.1007/s11071-012-0665-y

In the original article [1], there are some errors in the proof of Theorem 1. We point out the mistake as follows:

The inequality (9) [\(2\alpha\tau_{k}+{\rm{ln}}(\xi)<0\), with ξ>1 and \(\tau_{k}>0\)] in Theorem 1 implies that α<0, where \(\alpha=(L+\frac{1}{2}\lambda-d^{\ast})\). Thus, with the Lyapunov function (12) in [1]

$$\begin{aligned} V(t) = & \frac{1}{2}\sum_{i=1}^Ne_i^T(t)e_i(t)\\ &{}+ \frac{1}{2}\sum_{i=1}^N(\hat{ \varTheta}_i-\varTheta_i)^T(\hat{ \varTheta}_i-\varTheta_i) \\ &{} +\frac{1}{2}\sum_{i=1}^N \frac{(d_i-d_i^\ast)^2}{k_i}, \end{aligned}$$

the enlargement \((L+\frac{1}{2}\lambda-d^{\ast})\sum_{i=1}^{N}e_{i}^{T}(t)e_{i}(t) \leq 2\alpha V(t)\), which appears in line 5 of the right column on page 356 of [1], will not necessarily hold.
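The difficulty is a sign issue: since α<0, multiplying the extra nonnegative terms of the Lyapunov function (12) in [1] by 2α can only decrease the right-hand side, namely

$$\begin{aligned} 2\alpha V(t) =& \alpha\sum_{i=1}^{N}e_{i}^{T}(t)e_{i}(t)+\alpha\sum_{i=1}^{N}(\hat{\varTheta}_{i}-\varTheta_{i})^{T}(\hat{\varTheta}_{i}-\varTheta_{i}) \\ &{}+\alpha\sum_{i=1}^{N}\frac{(d_{i}-d_{i}^{\ast})^{2}}{k_{i}} \\ \leq&\alpha\sum_{i=1}^{N}e_{i}^{T}(t)e_{i}(t), \end{aligned}$$

so the stated enlargement can hold only when the last two sums vanish.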

To correct the mistakes in the original paper, we slightly revise them, and corrected versions of Theorem 1 and Corollary 1 are given below.

Theorem 1

Suppose that A1 holds, and let λ be the largest eigenvalue of \((C\otimes A)+(C\otimes A)^{T}\). If

$$ \beta_{ik}=\lambda_{\mathrm{max}}\bigl[(I+B_{ik})^T(I+B_{ik}) \bigr]<1 $$
(8)

and there exists a constant ξ>1 such that

$$ 2\alpha\tau_k+{\rm{ln}}(\xi)<0 $$
(9)

under the following restriction conditions

$$ \begin{aligned} & U_i=-d^\ast e_i(t)-\mu\|\hat{\boldsymbol {\Theta}}-\boldsymbol {\Theta}\|^2 \frac{e_i}{\|E\|^2}, \\ &\dot{\hat{\varTheta}}_i=- g_i^T \bigl(t,y_i(t)\bigr)e_i(t),\end{aligned} $$
(10)

where \(d^{\ast}>0\), μ>0, \(\boldsymbol {\Theta}=(\varTheta_{1}^{T},\varTheta_{2}^{T},\ldots,\varTheta_{N}^{T})^{T}\), \(\hat{\boldsymbol {\Theta}}\) is an estimate of \(\boldsymbol {\Theta}\), and \(E(t)=(e_{1}^{T}(t),e_{2}^{T}(t), \ldots,e_{N}^{T}(t))^{T}\). Then the impulsively controlled network (4) and network (3) in Ref. [1] are asymptotically synchronized. Moreover,

$$ \hat{\varTheta}_i\rightarrow\varTheta_i , \quad i=1,2,\ldots,N. $$
(11)
Proof

Construct the Lyapunov function as follows:

$$\begin{aligned} V(t)&= \frac{1}{2}\bigl\| E(t)\bigr\| ^2+\frac{1}{2}\| \hat{\boldsymbol {\Theta}}-\boldsymbol {\Theta}\|^2 \\ &=\frac{1}{2}\sum_{i=1}^Ne_i^T(t)e_i(t) \\ &\quad {}+ \frac{1}{2}\sum_{i=1}^N(\hat{ \varTheta}_i-\varTheta_i)^T(\hat{ \varTheta}_i-\varTheta_i). \end{aligned}$$
(12)

Then

$$\begin{aligned} \dot{V}(t)= \sum_{i=1}^Ne_i^T(t) \dot{e}_i(t)+\sum_{i=1}^N\dot{ \hat{\varTheta}}_i^T(\hat{\varTheta}_i- \varTheta_i), \end{aligned}$$

Evaluating this derivative along the error system (6) and using Assumption 1 in [1], we have

$$\begin{aligned} \dot{V}(t) \leq&\sum_{i=1}^Ne_i^T(t) \bigl[g_i\bigl(t,y_i(t)\bigr)\cdot(\hat{ \varTheta}_i-\varTheta_i) \bigr]\\ &{}+\sum _{i=1}^NL_ie_i^T(t)e_i(t) \\ &{}+\sum_{i=1}^N\sum _{j=1}^N e_i^T(t)c_{ij}Ae_j(t)+ \sum_{i=1}^N e_i^T(t)U_i \\ &{}+\sum_{i=1}^N\dot{\hat{ \varTheta}}_i^T(\hat{\varTheta}_i- \varTheta_i). \end{aligned}$$

Denote \(L=\max_{i}\{L_{i}\}\) and notice that \(E(t)= (e_{1}^{T}(t),e_{2}^{T}(t),\ldots,e_{N}^{T}(t))^{T}\). Substituting Eq. (10) into the above inequality, we further have

$$\begin{aligned} \dot{V}(t) \leq& L\sum_{i=1}^Ne_i^T(t)e_i(t)+E^T(t) (C\otimes A)E(t) \\ &{}-\sum_{i=1}^Nd^\ast e_i^T(t)e_i(t)\\ &{}-\mu \sum _{i=1}^N\|\hat{\boldsymbol {\Theta}}-\boldsymbol {\Theta} \|^2e_i^T\frac{e_i}{\|E\|^2}. \end{aligned}$$

In fact, \(\sum_{i=1}^{N}e_{i}^{T}\frac{e_{i}}{\|E\|^{2}}=1\), since \(\|E(t)\|^{2}=\sum_{i=1}^{N}e_{i}^{T}(t)e_{i}(t)\); thus,

$$\begin{aligned} \dot{V}(t) \leq& L\sum_{i=1}^Ne_i^T(t)e_i(t)+E^T(t)\\ &{}\times \frac{(C\otimes A)+(C\otimes A)^T}{2}E(t) \\ &{} -d^\ast\sum_{i=1}^Ne_i^T(t)e_i(t)- \mu\|\hat{\boldsymbol {\Theta}}-\boldsymbol {\Theta}\|^2. \end{aligned}$$
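Here the coupling term has been symmetrized: \(E^{T}(t)(C\otimes A)E(t)\) is a scalar and therefore equals its own transpose, so that

$$E^{T}(t) (C\otimes A)E(t)=\frac{1}{2}E^{T}(t)\bigl[(C\otimes A)+(C\otimes A)^{T}\bigr]E(t). $$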

Further,

$$\begin{aligned} \dot{V}(t) \leq&\biggl(L+\frac{1}{2}\lambda-d^\ast\biggr)\sum _{i=1}^Ne_i^T(t)e_i(t)- \mu\|\hat{\boldsymbol {\Theta}}-\boldsymbol {\Theta}\|^2 \\ =&\biggl(L+\frac{1}{2}\lambda-d^\ast\biggr) \bigl\| E(t) \bigr\| ^2-\mu \|\hat{\boldsymbol {\Theta}}-\boldsymbol {\Theta}\|^2 \\ \leq & 2\alpha V(t), \end{aligned}$$

where \(\alpha=\max\{\nu,-\mu\}<0\) and \(\nu=L+\frac{1}{2}\lambda-d^{\ast}<0\) for large enough \(d^{\ast}\).
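The last step uses the corrected Lyapunov function (12): since ν≤α and −μ≤α,

$$\nu\bigl\|E(t)\bigr\|^{2}-\mu\|\hat{\boldsymbol {\Theta}}-\boldsymbol {\Theta}\|^{2}\leq\alpha\bigl\|E(t)\bigr\|^{2}+\alpha\|\hat{\boldsymbol {\Theta}}-\boldsymbol {\Theta}\|^{2}=2\alpha V(t). $$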

This implies that

$$ V(t)\leq V\bigl(t_{k-1}^+\bigr)e^{2\alpha(t-t_{k-1})},\quad t \in (t_{k-1},t_k] $$
(13)

which is the same inequality as Eq. (13) in [1].

For \(t=t_{k}\), from Eq. (7) in [1], we have

$$\begin{aligned} V\bigl(t_k^+\bigr) = &\frac{1}{2}\sum _{i=1}^Ne_i^T(t_k) (I+B_{ik})^T(I+B_{ik})e_i(t_k) \\ &{} +\frac{1}{2}\sum_{i=1}^N(\hat{ \varTheta}_i-\varTheta_i)^T(\hat{ \varTheta}_i-\varTheta_i) \\ \leq & V(t_k), \end{aligned}$$
(14)

due to \(\beta_{ik}=\lambda_{\max}[(I+B_{ik})^{T}(I+B_{ik})]<1\).
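The first line of (14) follows from the jump relation in Eq. (7) of [1], \(e_{i}(t_{k}^{+})=(I+B_{ik})e_{i}(t_{k})\), and the inequality in (14) follows from the quadratic-form bound

$$e_{i}^{T}(t_{k}) (I+B_{ik})^{T}(I+B_{ik})e_{i}(t_{k})\leq\beta_{ik}e_{i}^{T}(t_{k})e_{i}(t_{k})< e_{i}^{T}(t_{k})e_{i}(t_{k}). $$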

From Eqs. (13) and (14), for \(t\in(t_{k},t_{k+1}]\), we have

$$ V(t)\leq V\bigl(t_0^+\bigr)e^{2\alpha(t-t_0)}. $$
(15)

By virtue of inequality (9) given in Theorem 1, we know that

$$e^{2\alpha\tau_k}<\frac{1}{\xi},\quad k=1,2,\ldots $$
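Indeed, inequality (9) gives \(2\alpha\tau_{k}<-\ln(\xi)\), and taking exponentials on both sides yields \(e^{2\alpha\tau_{k}}<e^{-\ln(\xi)}=1/\xi\).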

Thus, the inequality (15) can be further rewritten as

$$\begin{aligned} V(t) \leq& V\bigl(t_0^+\bigr) \bigl(e^{2\alpha\tau_1}\bigr)\cdots \bigl(e^{2\alpha\tau_k}\bigr)e^{2\alpha(t-t_k)} \\ <&V\bigl(t_0^+\bigr)\frac{1}{\xi^k}e^{2\alpha(t-t_k)}\leq V\bigl(t_0^+\bigr)\frac{1}{\xi^k}, \end{aligned}$$
(16)

therefore V(t)→0 as k→∞ because ξ>1, which implies that all the errors \(e_{i}(t)\rightarrow0\) and \(\hat{\varTheta}_{i}\rightarrow \varTheta_{i}\) (i=1,2,…,N). So the synchronization between the impulsively controlled complex network (4) and network (3) in [1] is realized and the unknown system parameters are identified simultaneously. This completes the proof. □

Remark 1

After the synchronization occurs, that is, all the \(e_{ij}\rightarrow0\) as t→∞, from the error system (6) in [1] one can get \(g_{i}(t, y_{i}(t))(\hat{\varTheta}_{i}-\varTheta_{i})={0}\); therefore, the conclusion (11) holds under the condition that all the column vectors of \(g_{i}(t,y_{i}(t))\) are linearly independent.
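For example, if the columns of \(g_{i}(t,y_{i}(t))\) are linearly independent at some time instant, then the linear system \(g_{i}(t,y_{i}(t))(\hat{\varTheta}_{i}-\varTheta_{i})=0\) admits only the trivial solution, which forces \(\hat{\varTheta}_{i}=\varTheta_{i}\).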

Corollary 1

If a complex network consists of N identical nodes described by

$$\begin{aligned} \dot{x}_i(t)=f\bigl(t,x_i(t)\bigr)+g \bigl(t,x_i(t)\bigr)\cdot\varTheta+\sum_{j=1}^N c_{ij}Ax_ j(t), \end{aligned}$$
(17)

then the unknown system parameters Θ can be identified by the estimates \(\hat{\varTheta}\) obtained with the following impulsively controlled response network

$$ \left \{\begin{array}{l} \displaystyle \dot{y}_i(t)=f(t,y_i(t))+g(t,y_i(t))\cdot\hat{\varTheta}\\ \hphantom{\dot{y}_i(t)=} {}+ \sum_{j=1}^N {c}_{ij}Ay_ j(t)+U_i, \quad t\neq t_k\\ \Delta y_i(t^+)=B_{ik}e_i(t),\quad t=t_k, \ k=1,2,\ldots\\ y_i(t_0^+)=y_{i0},\\ U_i=-d^\ast e_i(t)-\mu\|\hat{{\varTheta}}-{\varTheta}\|^2\frac{e_i}{\|E\|^2},\\ \end{array} \right . $$
(18)

if

$$ 2\alpha\tau_k+{\rm{ln}}(\xi)<0 $$
(19)

and

$$ \dot{\hat{\varTheta}}={-} \sum_{i=1}^Ng^T \bigl(t,y_i(t)\bigr)e_i(t), $$
(20)

where λ is the largest eigenvalue of \((C\otimes A)+(C\otimes A)^{T}\), the constant ξ>1, and α is defined as in Theorem 1.

In the numerical simulation, let \(d^{\ast}=100\), μ=1.5, \(B_{ik}=\operatorname{diag}\{-0.5,-0.5,-0.5\}\), \(\tau_{k}=0.1\). Figures 1 and 2 show, respectively, the synchronization errors \(e_{i1}(t), e_{i2}(t), e_{i3}(t)\) and the identified parameters under the updating law (20).

Fig. 1 Synchronization errors \(e_{i}(t)\) (1≤i≤5)

Fig. 2 Identification of system parameters
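For readers who wish to reproduce qualitatively similar behavior, a minimal simulation sketch is given below. Since the node dynamics of networks (3) and (4) are specified only in [1], the sketch is illustrative rather than a reproduction of the original experiment: it assumes, purely for illustration, five Lorenz-type nodes with \(\varTheta=(a,c,b)^{T}\), a ring coupling matrix C with zero row sums, and A=I; only \(d^{\ast}=100\), μ=1.5, \(B_{ik}=\operatorname{diag}\{-0.5,-0.5,-0.5\}\), and \(\tau_{k}=0.1\) are taken from the text. It integrates the drive network (17) and the controlled response network (18) with the updating law (20), applying the impulsive jumps \(e_{i}(t_{k}^{+})=(I+B_{ik})e_{i}(t_{k})\) every \(\tau_{k}\) time units.

```python
import numpy as np

# Parameters taken from the erratum text
d_star, mu, tau = 100.0, 1.5, 0.1
B = np.diag([-0.5, -0.5, -0.5])               # impulsive gain B_ik (same for all i, k)

# Illustrative assumptions (NOT specified in the erratum): 5 Lorenz-type nodes,
# ring coupling with zero row sums, inner coupling matrix A = I.
N = 5
theta_true = np.array([10.0, 28.0, 8.0 / 3])  # hypothetical true parameters (a, c, b)
A = np.eye(3)
C = np.zeros((N, N))
for i in range(N):
    C[i, (i - 1) % N] = C[i, (i + 1) % N] = 1.0
    C[i, i] = -2.0

def f(x):   # parameter-free part of the node dynamics
    return np.array([0.0, -x[0] * x[2] - x[1], x[0] * x[1]])

def g(x):   # regressor, so that a node reads dx/dt = f(x) + g(x) @ theta + coupling
    return np.array([[x[1] - x[0], 0.0, 0.0],
                     [0.0, x[0], 0.0],
                     [0.0, 0.0, -x[2]]])

def rhs(X, Y, theta_hat):
    """Vector fields of drive (17) and controlled response (18) between impulses."""
    E = Y - X
    normE2 = max(np.sum(E * E), 1e-12)         # avoid division by zero in the controller
    err2 = np.sum((theta_hat - theta_true) ** 2)   # ||theta_hat - theta||^2, as written in (18)
    dX, dY, dtheta = np.zeros_like(X), np.zeros_like(Y), np.zeros(3)
    for i in range(N):
        coupX = sum(C[i, j] * (A @ X[j]) for j in range(N))
        coupY = sum(C[i, j] * (A @ Y[j]) for j in range(N))
        dX[i] = f(X[i]) + g(X[i]) @ theta_true + coupX
        U = -d_star * E[i] - mu * err2 * E[i] / normE2     # controller in (18)
        dY[i] = f(Y[i]) + g(Y[i]) @ theta_hat + coupY + U
        dtheta += -g(Y[i]).T @ E[i]                        # updating law (20)
    return dX, dY, dtheta

# Euler integration with impulses applied every tau time units
h, T = 1e-4, 2.0
rng = np.random.default_rng(0)
X, Y = rng.uniform(-1, 1, (N, 3)), rng.uniform(-1, 1, (N, 3))
theta_hat, t, next_jump = np.zeros(3), 0.0, tau
while t < T:
    dX, dY, dtheta = rhs(X, Y, theta_hat)
    X, Y, theta_hat = X + h * dX, Y + h * dY, theta_hat + h * dtheta
    t += h
    if t >= next_jump:                         # impulsive jump: e_i(t_k^+) = (I + B) e_i(t_k)
        Y = X + (Y - X) @ (np.eye(3) + B).T
        next_jump += tau

print("final synchronization error:", np.linalg.norm(Y - X))
print("estimated parameters       :", theta_hat)
```

Note that the controller in (18) contains \(\|\hat{\varTheta}-\varTheta\|^{2}\) explicitly, so the sketch evaluates this norm with the true parameters, exactly as the formula is written.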