Learning feature constraints in a chaotic neural memory

Shigetoshi Nara and Peter Davis
Phys. Rev. E 55, 826 – Published 1 January 1997

Abstract

We consider a neural network memory model that has both nonchaotic and chaotic regimes. The chaotic regime occurs for reduced neural connectivity. We show that it is possible to adapt the dynamics in the chaotic regime, by reinforcement learning, to learn multiple constraints on feature subsets. This results in chaotic pattern generation biased toward the feature patterns that have received responses. Depending on the connectivity, there can be additional memory pulling effects, due to the correlations between the constrained neurons in the feature subsets and the other neurons.
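The abstract does not specify the network equations. As a rough illustration of the kind of model described, the following is a minimal sketch, assuming a Hopfield-type binary network with Hebbian couplings and a randomly restricted fan-in, which is one common way such reduced-connectivity chaotic memories are formulated; the network size, number of stored patterns, fan-in, and synchronous sign-update rule are illustrative assumptions, not taken from the paper, and the reinforcement-learning stage is omitted.

    # Illustrative sketch (not the authors' code): binary associative memory
    # with Hebbian couplings and reduced connectivity. All parameters are
    # assumptions for demonstration only.
    import numpy as np

    rng = np.random.default_rng(0)

    N = 100       # number of binary (+1/-1) neurons
    P = 5         # number of stored random patterns
    fan_in = 20   # inputs retained per neuron; small fan_in = "reduced connectivity"

    # Store random patterns with the standard Hebbian rule.
    patterns = rng.choice([-1, 1], size=(P, N))
    W = (patterns.T @ patterns) / N
    np.fill_diagonal(W, 0.0)

    # Reduce connectivity: each neuron keeps only `fan_in` randomly chosen inputs.
    mask = np.zeros((N, N), dtype=bool)
    for i in range(N):
        keep = rng.choice(np.delete(np.arange(N), i), size=fan_in, replace=False)
        mask[i, keep] = True
    W_reduced = np.where(mask, W, 0.0)

    # Synchronous update. With full connectivity the state relaxes to a stored
    # pattern; the abstract reports that sufficiently reduced connectivity
    # instead yields chaotic wandering, which learning can then bias toward
    # rewarded feature patterns.
    def step(state, weights):
        return np.sign(weights @ state + 1e-12)  # tiny offset avoids sign(0)

    state = rng.choice([-1, 1], size=N)
    for t in range(50):
        state = step(state, W_reduced)
        overlaps = patterns @ state / N  # overlap with each stored pattern
        print(t, np.round(overlaps, 2))

Tracking the overlaps with the stored patterns over time is one simple way to see whether the trajectory settles on a single memory or keeps wandering among memory-related states as the fan-in is reduced.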

  • Received 26 April 1996

DOI: https://doi.org/10.1103/PhysRevE.55.826

©1997 American Physical Society

Authors & Affiliations

Shigetoshi Nara¹ and Peter Davis²

  • ¹ The Division of Mathematical and Information Sciences, Faculty of Integrated Arts and Sciences, Hiroshima University, Kagamiyama 1-7-1, Higashi-Hiroshima 739, Japan
  • ² ATR Adaptive Communications Research Laboratories, 2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-02, Japan

Issue

Vol. 55, Iss. 1 — January 1997
