Topology and computational performance of attractor neural networks

Patrick N. McGraw and Michael Menzinger
Phys. Rev. E 68, 047102 – Published 10 October 2003

Abstract

To explore the relation between network structure and function, we studied the computational performance of Hopfield-type attractor neural nets with regular lattice, random, small-world, and scale-free topologies. The random configuration is the most efficient for storage and retrieval of patterns by the network as a whole. However, in the scale-free case retrieval errors are not distributed uniformly among the nodes. The portion of a pattern encoded by the subset of highly connected nodes is more robust and efficiently recognized than the rest of the pattern. The scale-free network thus achieves a very strong partial recognition. The implications of these findings for brain function and social dynamics are suggestive.
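
As a concrete illustration of the setup described above, the following minimal sketch (not the authors' code) builds a Hebbian Hopfield network whose couplings are diluted onto the edges of a chosen graph and measures retrieval from a corrupted cue. NumPy and NetworkX, the network size, the mean degree of about 8, and the asynchronous update schedule are all assumptions made for the example; the paper's parameters and conventions may differ.

import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
N, P = 200, 10                                  # nodes, stored patterns
patterns = rng.choice([-1, 1], size=(P, N))

# Topologies with comparable mean degree (~8, an assumed value).
graphs = {
    "random":      nx.gnm_random_graph(N, 4 * N, seed=0),
    "small-world": nx.watts_strogatz_graph(N, 8, 0.1, seed=0),
    "scale-free":  nx.barabasi_albert_graph(N, 4, seed=0),
}

def hebbian_weights(G, patterns):
    """Hebbian couplings kept only on the graph's edges (diluted Hopfield)."""
    mask = nx.to_numpy_array(G)                 # adjacency matrix as 0/1 mask
    W = (patterns.T @ patterns) / len(mask)     # standard Hebb rule
    np.fill_diagonal(W, 0.0)                    # no self-coupling
    return W * mask

def retrieve(W, cue, sweeps=10):
    """Random sequential (asynchronous) spin updates."""
    s = cue.copy()
    n = len(s)
    for i in rng.integers(n, size=sweeps * n):
        h = W[i] @ s                            # local field at node i
        if h != 0:
            s[i] = 1 if h > 0 else -1
    return s

for name, G in graphs.items():
    W = hebbian_weights(G, patterns)
    cue = patterns[0].copy()
    flip = rng.choice(N, size=N // 5, replace=False)
    cue[flip] *= -1                             # corrupt 20% of the bits
    m = patterns[0] @ retrieve(W, cue) / N      # overlap with stored pattern
    print(f"{name:12s} retrieval overlap: {m:.2f}")

Grouping the per-bit retrieval errors by node degree in the scale-free case would then expose the hub robustness reported in the abstract: bits stored on highly connected nodes should be recovered more reliably than those on sparsely connected ones.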

  • Received 1 August 2003

DOI: https://doi.org/10.1103/PhysRevE.68.047102

©2003 American Physical Society

Authors & Affiliations

Patrick N. McGraw* and Michael Menzinger

  • Department of Chemistry, University of Toronto, Toronto, Ontario, Canada M5S 3H6

  • *Electronic address: pmcgraw@chem.utoronto.ca

Issue

Vol. 68, Iss. 4 — October 2003
