
01.06.2014 | Original Article

A winner-take-all Lotka–Volterra recurrent neural network with only one winner in each row and each column

Author: Bochuan Zheng

Published in: Neural Computing and Applications | Issue 7-8/2014


Abstract

A winner-take-all Lotka–Volterra recurrent neural network with N × N neurons is proposed in this paper. Sufficient conditions for the existence of winner-take-all stable equilibrium points in the network are obtained. These conditions guarantee that there is one and only one winner in each row and each column at any stable equilibrium point. In addition, a rigorous convergence analysis is carried out, and it is proven that the proposed network model is convergent. The conditions for winner-take-all behavior obtained in this paper provide design guidelines for network implementation and fabrication. Simulations are also presented to illustrate the theoretical findings.
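The abstract does not reproduce the model equations, but the following Python sketch illustrates the general idea of a Lotka–Volterra-type competitive dynamic on an N × N grid in which each neuron is inhibited by the other neurons in its own row and column. The equation form, the inhibition strength k, the inputs h, and the winner threshold are illustrative assumptions chosen for demonstration; they are not the author's model or the conditions derived in the paper.

```python
import numpy as np

# Illustrative sketch only: a Lotka-Volterra-type competitive dynamic on an
# N x N grid, where each neuron is inhibited by the other neurons in its own
# row and column.  The equation form and all parameters below are assumed for
# demonstration; they are not the model or conditions derived in the paper.

def lv_row_column_wta(h, k=2.0, dt=0.01, steps=5000, seed=0):
    """Euler-integrate dx_ij/dt = x_ij * (h_ij - x_ij - k * (row + column inhibition))."""
    n = h.shape[0]
    rng = np.random.default_rng(seed)
    x = 0.1 + 0.01 * rng.random((n, n))                # small positive initial states
    for _ in range(steps):
        row_inhib = x.sum(axis=1, keepdims=True) - x   # other neurons in the same row
        col_inhib = x.sum(axis=0, keepdims=True) - x   # other neurons in the same column
        dx = x * (h - x - k * (row_inhib + col_inhib))
        x = np.maximum(x + dt * dx, 0.0)               # keep states non-negative
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    h = rng.uniform(0.5, 1.5, size=(4, 4))             # hypothetical external inputs
    x_final = lv_row_column_wta(h)
    winners = (x_final > 0.1).astype(int)              # losing states decay toward zero
    print(winners)
    print("winners per row:", winners.sum(axis=1))
    print("winners per column:", winners.sum(axis=0))
```

With sufficiently strong lateral inhibition relative to the inputs, the losing states decay toward zero and, ideally, exactly one active neuron remains in each row and each column, mirroring the winner-take-all behavior that the paper establishes rigorously.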


Metadata
Title
A winner-take-all Lotka–Volterra recurrent neural network with only one winner in each row and each column
Author
Bochuan Zheng
Publication date
01.06.2014
Publisher
Springer London
Published in
Neural Computing and Applications / Issue 7-8/2014
Print ISSN: 0941-0643
Electronic ISSN: 1433-3058
DOI
https://doi.org/10.1007/s00521-013-1412-0
