Elements of Cognitive Systems Theory

In: Complex and Adaptive Dynamical Systems

Abstract

The brain is without doubt the most complex adaptive system known to humanity; arguably, it is also a complex system about which we know very little.


Notes

  1. See, e.g., Abeles et al. (1995) and Kenet et al. (2003).

  2. Humans can cognitively distinguish about 10–12 objects per second.

  3. See Edelman and Tononi (2000).

  4. See Dehaene and Naccache (2001) and Baars and Franklin (2003).

  5. See Crick and Koch (2003).

  6. Note that neuromodulators are typically released into the intercellular medium, from where they diffuse physically towards the surrounding neurons.

  7. This is a standard result for so-called Hopfield neural networks; see, e.g., Ballard (2000).

  8. A neural network is denoted “recurrent” when loops dominate the network topology.

  9. For a mathematically precise definition, a memory is termed fading when forgetting is scale-invariant, viz. when it has a power-law dependence on time, \(\sim t^{-\gamma}\) with \(\gamma>0\).

  10. From the point of view of dynamical systems theory, effective freedom of action is conceivable in connection with a true dynamical phase transition, like the ones discussed in Chap. 4, possibly occurring in a high-level cognitive system. Whether dynamical phase transitions are of relevance for the mammalian brain, e.g. in relation to the phenomenon of consciousness, is a central yet unresolved issue.

  11. We note that general n-point interactions could additionally be generated when eliminating the interneurons. “n-point interactions” are terms entering the time evolution of dynamical systems that depend on \((n-1)\) variables. Normal synaptic interactions are 2-point interactions, as they involve two neurons, the presynaptic and the postsynaptic neuron. When integrating out a degree of freedom, like the activity of the interneurons, n-point interactions are generated generically; schematically, a 3-point interaction contributes a term of the form \(\sum_{j,k} w_{ijk}\, y_j y_k\) to the time evolution of the postsynaptic membrane potential. The postsynaptic neuron is then influenced only when \((n-1)\) presynaptic neurons are active simultaneously. n-point interactions are normally not considered in neural-network theory; they complicate the analysis of the network dynamics considerably.

  12. Here we use the term “transient attractor” as synonymous with “attractor ruin”, an alternative terminology from dynamical systems theory.

  13. A possible mathematical implementation for the reservoir functions, with \(\alpha={\textrm{w}},z\), is \( f_\alpha(\varphi)\ =\ f_\alpha^{(\min)} \,+\, \left(1-f_\alpha^{(\min)}\right) \frac{ \textrm{atan}[(\varphi-\varphi_c^{(\alpha)})/\varGamma_\varphi] - \textrm{atan}[(0-\varphi_c^{(\alpha)})/\varGamma_\varphi]}{\textrm{atan}[(1-\varphi_c^{(\alpha)})/\varGamma_\varphi] - \textrm{atan}[(0-\varphi_c^{(\alpha)})/\varGamma_\varphi] } \). Suitable values are \(\varphi_c^{(z)}=0.15\), \(\varphi_c^{({\textrm{w}})}=0.7\), \(\varGamma_\varphi=0.05\), \(f_{\textrm{w}}^{(\min)}=0.1\) and \(f_z^{(\min)}=0\).
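In code, the reservoir function above can be transcribed directly. A minimal Python sketch (function and variable names are our own):

```python
import numpy as np

def f_reservoir(phi, phi_c, f_min, gamma_phi=0.05):
    """Atan sigmoid, rescaled such that f(0) = f_min and f(1) = 1."""
    g = lambda x: np.arctan((x - phi_c) / gamma_phi)
    return f_min + (1.0 - f_min) * (g(phi) - g(0.0)) / (g(1.0) - g(0.0))

# The suitable values quoted above:
f_z = lambda phi: f_reservoir(phi, phi_c=0.15, f_min=0.0)
f_w = lambda phi: f_reservoir(phi, phi_c=0.70, f_min=0.1)

print(f_z(0.0), f_z(1.0))   # 0.0  1.0
print(f_w(0.0), f_w(1.0))   # 0.1  1.0
```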

  14. A Kohonen network is an example of a neural classifier based on a winner-takes-all architecture; see, e.g., Ballard (2000).

Further Reading

  • For a general introduction to the field of artificial intelligence (AI), see Russell and Norvig (1995). For a handbook on experimental and theoretical neuroscience, see Arbib (2002). For an exemplary textbook on theoretical neuroscience, see Dayan and Abbott (2001), and for an introduction to neural networks, see Ballard (2000).


  • Somewhat more specialized books for further reading are that by McLeod et al. (1998), on the modeling of cognitive processes by small neural networks, and that by O’Reilly and Munakata (2000), on computational neuroscience.


  • The following review articles are recommended: on dynamical modeling in neuroscience, Rabinovich et al. (2006); on reinforcement learning, Kaelbling et al. (1996); and on learning and memory storage in neural nets, Carpenter (2001).


  • We also recommend that the interested reader go back to some selected original literature dealing with “simple recurrent networks” in the context of grammar acquisition (Elman, 1990, 2004), with neural networks for time-series prediction tasks (Dorffner, 1996), with “learning by error” (Chialvo and Bak, 1999), with the assignment of the cognitive tasks discussed in Sect. 8.3.1 to specific mammalian brain areas (Doya, 1999), with the effect on memory storage capacity of various Hebbian-type learning rules (Chechik et al., 2001), with the concept of “associative thought processes” (Gros, 2007, 2009a) and with “diffusive emotional control” (Gros, 2009b).


  • It is very illuminating to take a look at the freely available databases storing human associative knowledge (Nelson et al., 1998; Liu and Singh, 2004).


  • Abeles, M. et al. 1995 Cortical activity flips among quasi-stationary states. Proceedings of the National Academy of Sciences USA 92, 8616–8620.


  • Arbib, M.A. 2002 The Handbook of Brain Theory and Neural Networks. MIT Press, Cambridge, MA.


  • Baars, B.J., Franklin, S. 2003 How conscious experience and working memory interact. Trends in Cognitive Science 7, 166–172.


  • Ballard, D.H. 2000 An Introduction to Natural Computation. MIT Press, Cambridge, MA.


  • Carpenter, G.A. 2001 Neural-network models of learning and memory: Leading questions and an emerging framework. Trends in Cognitive Science 5, 114–118.


  • Chechik, G., Meilijson, I., Ruppin, E. 2001 Effective neuronal learning with ineffective Hebbian learning rules. Neural Computation 13, 817.


  • Chialvo, D.R., Bak, P. 1999 Learning from mistakes. Neuroscience 90, 1137–1148.


  • Crick, F.C., Koch, C. 2003 A framework for consciousness. Nature Neuroscience 6, 119–126.


  • Dayan, P., Abbott, L.F. 2001 Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. MIT Press, Cambridge, MA.


  • Dehaene, S., Naccache, L. 2001 Towards a cognitive neuroscience of consciousness: Basic evidence and a workspace framework. Cognition 79, 1–37.


  • Dorffner, G. 1996 Neural networks for time series processing. Neural Network World 6, 447–468.


  • Doya, K. 1999 What are the computations of the cerebellum, the basal ganglia and the cerebral cortex? Neural Networks 12, 961–974.


  • Edelman, G.M., Tononi, G.A. 2000 A Universe of Consciousness. Basic Books, New York.


  • Elman, J.L. 1990 Finding structure in time. Cognitive Science 14, 179–211.


  • Elman, J.L. 2004 An alternative view of the mental lexicon. Trends in Cognitive Sciences 8, 301–306.


  • Gros, C. 2007 Neural networks with transient state dynamics. New Journal of Physics 9, 109.


  • Gros, C. 2009a Cognitive computation with autonomously active neural networks: An emerging field. Cognitive Computation 1, 77.


  • Gros, C. 2009b Emotions, diffusive emotional control and the motivational problem for autonomous cognitive systems. In: Vallverdu, J., Casacuberta, D. (eds.), Handbook of Research on Synthetic Emotions and Sociable Robotics: New Applications in Affective Computing and Artificial Intelligence. IGI Global, Hershey, PA.


  • Kaelbling, L.P., Littman, M.L., Moore, A. 1996 Reinforcement learning: A survey. Journal of Artificial Intelligence Research 4, 237–285.


  • Kenet, T., Bibitchkov, D., Tsodyks, M., Grinvald, A., Arieli, A. 2003 Spontaneously emerging cortical representations of visual attributes. Nature 425, 954–956.


  • Liu, H., Singh, P. 2004 ConceptNet: A practical commonsense reasoning tool-kit. BT Technology Journal 22, 211–226.


  • McLeod, P., Plunkett, K., Rolls, E.T. 1998 Introduction to Connectionist Modelling. Oxford University Press, New York.


  • Nelson, D.L., McEvoy, C.L., Schreiber, T.A. 1998 The University of South Florida Word Association, Rhyme, and Word Fragment Norms. Homepage: http://www.usf.edu/FreeAssociation.

  • O’Reilly, R.C., Munakata, Y. 2000 Computational Explorations in Cognitive Neuroscience: Understanding the Mind by Simulating the Brain. MIT Press, Cambridge.


  • Rabinovich, M.I., Varona, P., Selverston, A.I., Abarbanel, H.D.I. 2006 Dynamical principles in neuroscience. Reviews of Modern Physics 78, 1213–1256.


  • Russell, S.J., Norvig, P. 1995 Artificial Intelligence: A Modern Approach. Prentice-Hall, Englewood Cliffs, NJ.


Author information

Correspondence to Claudius Gros.

Exercises

8.1.1 Transient State Dynamics

Consider a system containing two variables, \(x,\varphi\in[0,1]\). Invent a system of coupled differential equations for which \(x(t)\) has two transient states, \(x\approx1\) and \(x\approx0\). One possibility is to consider \(\varphi\) as a reservoir and to let \(x(t)\) autoexcite/autodeplete itself when the reservoir is high/low.

The transient-state dynamics should be robust. Write a program implementing the differential equations.
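A minimal sketch of one possible solution in Python follows. The specific equations and parameter values are our own choice, not the unique answer: \(x\) autoexcites towards 1 while the reservoir \(\varphi\) lies above 1/2 and autodepletes towards 0 while it lies below, and \(\varphi\) slowly relaxes towards \(1-x\), depleting when \(x\) is high and refilling when \(x\) is low.

```python
# One possible realization (our own choice). For GAMMA_PHI << GAMMA_X
# the trajectory shows long plateaus at x ~ 1 and x ~ 0, the two
# transient states, connected by fast switching events.
GAMMA_X, GAMMA_PHI = 10.0, 1.0

def rhs(x, phi):
    dx = GAMMA_X * (2.0 * phi - 1.0) * x * (1.0 - x)  # autoexcitation/-depletion
    dphi = GAMMA_PHI * ((1.0 - x) - phi)              # slow reservoir dynamics
    return dx, dphi

# plain Euler integration
x, phi, dt = 0.9, 0.9, 1e-3
for n in range(40000):
    dx, dphi = rhs(x, phi)
    x, phi = x + dt * dx, phi + dt * dphi
    if n % 2000 == 0:
        print(f"t = {n * dt:5.1f}   x = {x:.3f}   phi = {phi:.3f}")
```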

8.1.2 The Diffusive Control Unit

Given are two signals \(y_1(t)\in[0,\infty)\) and \(y_2(t)\in[0,\infty)\). Invent a system of differential equations for variables \(x_1(t)\in[0,1]\) and \(x_2(t)\in[0,1]\), driven by the \(y_{1,2}(t)\), such that \(x_1\to1\) and \(x_2\to0\) when \(y_1>y_2\), and vice versa. Note that the \(y_{1,2}\) are not necessarily normalized.
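One possible construction is sketched below in Python; the scale-invariant difference \(s\) and the sharpness parameter \(\kappa\) are our own choices, not prescribed by the exercise. Dividing the signal difference by the signal sum makes the unit insensitive to the overall scale of the inputs, which takes care of the missing normalization.

```python
import math

GAMMA, KAPPA, EPS = 1.0, 0.05, 1e-12

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def rhs(x1, x2, y1, y2):
    s = (y1 - y2) / (y1 + y2 + EPS)           # scale-invariant difference
    dx1 = GAMMA * (sigmoid(+s / KAPPA) - x1)  # x1 -> 1 when y1 > y2
    dx2 = GAMMA * (sigmoid(-s / KAPPA) - x2)  # x2 -> 0 when y1 > y2
    return dx1, dx2

# Euler integration with constant, non-normalized inputs y1 > y2
x1, x2, dt = 0.5, 0.5, 1e-2
for _ in range(1000):
    dx1, dx2 = rhs(x1, x2, y1=3.0, y2=1.0)
    x1, x2 = x1 + dt * dx1, x2 + dt * dx2
print(f"x1 = {x1:.3f}, x2 = {x2:.3f}")        # close to 1 and 0
```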

8.1.3 Leaky Integrator Neurons

Consider a two-site network of neurons, having membrane potentials \(x_i\) and activities \(y_i\in[-1,1]\), the so-called “leaky integrator” model for neurons,

$$\dot x_1 \ =\ -\varGamma x_1 - {\textrm{w}} y_2, \qquad \dot x_2 \ =\ -\varGamma x_2 + {\textrm{w}} y_1, \qquad y_i \ =\ \frac{2}{e^{-x_i} +1}-1 ,$$

with \(\varGamma>0\) being the decay rate. The coupling \({\textrm{w}}>0\) links neuron one (two) excitatorily (inhibitorily) to neuron two (one). What are the fixpoints, and for which parameters can one observe weakly damped oscillations?
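For orientation, a standard linearization (a sketch, not a worked solution): the only fixpoint is \(x_1=x_2=0\), where \(y_i\approx x_i/2\), so the Jacobian has eigenvalues \(-\varGamma\pm i\,{\textrm{w}}/2\), viz damped oscillations of frequency \({\textrm{w}}/2\) that become weakly damped for \(\varGamma\ll{\textrm{w}}\). A minimal Euler simulation in Python:

```python
import math

GAMMA, W = 0.1, 1.0   # weakly damped regime: GAMMA << W

def y(x):
    return 2.0 / (math.exp(-x) + 1.0) - 1.0   # activity, in [-1, 1]

def rhs(x1, x2):
    return (-GAMMA * x1 - W * y(x2),          # neuron 1, inhibited by 2
            -GAMMA * x2 + W * y(x1))          # neuron 2, excited by 1

x1, x2, dt = 1.0, 0.0, 1e-2
for n in range(3000):
    dx1, dx2 = rhs(x1, x2)
    x1, x2 = x1 + dt * dx1, x2 + dt * dx2
    if n % 300 == 0:
        print(f"t = {n * dt:5.1f}   x1 = {x1:+.3f}   x2 = {x2:+.3f}")
```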

8.1.4 Associative Overlaps and Thought Processes

Consider the seven-site network of Fig. 8.6. Evaluate all pairwise associative overlaps of order zero and of order one between the five cliques, using Eqs. (8.4) and (8.5). Generate an associative thought process of cliques \(\alpha_1,\ \alpha_2,\ldots\), where a new clique \(\alpha_{t+1}\) is selected using the following simplified dynamics:

  (1) \({\alpha_{t+1}}\) has an associative overlap of order zero with \({\alpha_{t}}\) and is distinct from \({\alpha_{t-1}}\).

  (2) If more than one clique satisfies criterion (1), then the clique with the highest associative overlap of order zero with \(\alpha_{t}\) is selected.

  (3) If more than one clique satisfies criteria (1)–(2), then one of them is drawn randomly.

Discuss the relation to the dHAN model treated in Sect. 8.4.2.
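A Python sketch of the simplified selection dynamics (1)–(3) follows; the overlap matrix A0 below is a symmetric placeholder, since the actual zero-order overlaps must first be evaluated from Fig. 8.6 via Eq. (8.4).

```python
import random

# A0[a][b]: associative overlap of order zero between cliques a and b.
# Placeholder values only, NOT those of the network of Fig. 8.6.
A0 = [[0, 2, 1, 0, 1],
      [2, 0, 1, 2, 0],
      [1, 1, 0, 1, 2],
      [0, 2, 1, 0, 1],
      [1, 0, 2, 1, 0]]

def next_clique(prev, cur):
    """Select alpha_{t+1}, given alpha_{t-1} = prev and alpha_t = cur."""
    # (1) nonzero zero-order overlap with alpha_t, distinct from alpha_{t-1}
    cand = [b for b in range(len(A0)) if b not in (prev, cur) and A0[cur][b] > 0]
    if not cand:
        return None
    # (2) keep only the candidates of maximal overlap ...
    best = max(A0[cur][b] for b in cand)
    # (3) ... and break any remaining ties randomly
    return random.choice([b for b in cand if A0[cur][b] == best])

prev, cur, process = None, 0, [0]
for _ in range(10):            # generate alpha_1, alpha_2, ...
    nxt = next_clique(prev, cur)
    if nxt is None:
        break
    prev, cur = cur, nxt
    process.append(cur)
print(process)
```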


Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Gros, C. (2011). Elements of Cognitive Systems Theory. In: Complex and Adaptive Dynamical Systems. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04706-0_8


  • DOI: https://doi.org/10.1007/978-3-642-04706-0_8


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-04705-3

  • Online ISBN: 978-3-642-04706-0

  • eBook Packages: Physics and Astronomy; Physics and Astronomy (R0)
