2014 | Original Paper | Book Chapter
Published in:
Growing Adaptive Machines
In nature, brains are built through a process of biological development in which many aspects of the network of neurons and connections change and are shaped by external information received through sensory organs. Numerous studies in neuroscience have demonstrated that developmental aspects of the brain are intimately involved in learning. Despite this, most artificial neural network (ANN) models do not include developmental mechanisms and regard learning as the adjustment of connection weights. Incorporating development into ANNs raises fundamental questions. What level of biological plausibility should be employed? In this chapter, we discuss two artificial developmental neural network models with differing degrees of biological plausibility. One takes the view that the neuron is fundamental (neuro-centric), so that all evolved programs operate at the level of the neuron; the other carries out development at the level of the entire network and evolves rules that change the network (holocentric). In the process, we hope to reveal some important issues and questions that are relevant to researchers wishing to create other such models.
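To make the neuro-centric versus holocentric contrast concrete, the following is a minimal, purely illustrative Python sketch of where a developmental program might act in each approach. The toy network, the growth/pruning thresholds, and the global rescaling rule are all hypothetical placeholders chosen for illustration; they are not the evolved Cartesian Genetic Programming programs described in the chapter.

```python
# Illustrative sketch only: contrasts where a developmental rule acts,
# per neuron (neuro-centric) versus on the whole network (holocentric).
# All rule bodies and thresholds are placeholder assumptions.
import random

random.seed(0)
N = 6  # neurons in the toy network

# Connection weights; None means "no connection exists yet".
weights = [[None] * N for _ in range(N)]
activity = [random.random() for _ in range(N)]  # stand-in for neural activity


def neuro_centric_step(weights, activity):
    """Each neuron applies its own local developmental rule:
    grow a connection to an active neighbour, prune connections to quiet ones."""
    n = len(weights)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if weights[i][j] is None and activity[j] > 0.7:
                weights[i][j] = 0.1          # placeholder growth rule
            elif weights[i][j] is not None and activity[j] < 0.2:
                weights[i][j] = None         # placeholder pruning rule


def holocentric_step(weights, activity):
    """A single network-level rule rewrites the whole connectivity pattern
    at once, e.g. rescaling every existing weight by overall activity."""
    mean_act = sum(activity) / len(activity)
    n = len(weights)
    for i in range(n):
        for j in range(n):
            if weights[i][j] is not None:
                weights[i][j] *= 1.0 + mean_act  # placeholder global rule


neuro_centric_step(weights, activity)
holocentric_step(weights, activity)
print(sum(w is not None for row in weights for w in row), "connections after development")
```

In the neuro-centric case the developmental decisions are made independently by each neuron from local information, whereas the holocentric rule sees and rewrites the whole network in one step; the chapter's two models differ along essentially this axis, but with evolved programs in place of the fixed heuristics above.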
- Title
- Neuro-Centric and Holocentric Approaches to the Evolution of Developmental Neural Networks
- DOI
- https://doi.org/10.1007/978-3-642-55337-0_8
- Author
- Julian F. Miller
- Publisher
- Springer Berlin Heidelberg
- Sequence Number
- 8
- Chapter Number
- Chapter 8