MENNAG: a modular, regular and hierarchical encoding for neural-networks based on attribute grammars

Research Paper, published in Evolutionary Intelligence

Abstract

Recent work in the field of evolutionary computation suggests that implementing the principles of modularity (functional localization), repetition (multiple use of the same sub-structure) and hierarchy (recursive composition of sub-structures) could improve the evolvability of complex systems. The generation of neural networks through evolutionary algorithms should in particular benefit from an adapted use of these notions. We have consequently developed a modular encoding for neural networks based on attribute grammars (MENNAG), a new encoding designed to generate both the structure and the parameters of neural networks with evolutionary algorithms, while explicitly enabling the three above-mentioned principles. We expressed this encoding in the formalism of attribute grammars in order to facilitate understanding and future modifications. It has been tested on two preliminary benchmark problems: cart-pole control and robotic arm control, the latter being specifically designed to evaluate the repetition capabilities of an encoding. We compared MENNAG to a direct encoding, ModNet, NEAT, a multi-layer perceptron with a fixed structure, and to reference controllers. Results show that MENNAG performs better than comparable encodings on both problems, suggesting promising potential for future applications.


Notes

  1. http://sferes.lip6.fr.

  2. For each DIV instruction, “0” is concatenated to its identifier to obtain that of its left son and “1” for its right one.

  3. To obtain this behavior, the identifier of the DIV is passed down to the CONN symbol. The DIV identifier is then concatenated to the evolved bit-strings representing the source and target neurons, thus guaranteeing that the connected neurons belong to one of the two sub-trees of the DIV (see the sketch after these notes).

  4. The first connection to an input is connected to the first input, etc.

  5. Neuron output belongs to the [−1, 1] interval and is mapped to a force ranging between −10 and 10 N.

  6. http://www.ode.org.
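The identifier scheme described in notes 2 and 3 can be made concrete with a short sketch. The following Python fragment is a hypothetical illustration (the function names are ours, not from the MENNAG implementation), assuming identifiers are plain bit-strings:

    def child_ids(div_id: str) -> tuple[str, str]:
        # Note 2: the left son of a DIV appends "0" to the DIV identifier,
        # the right son appends "1".
        return div_id + "0", div_id + "1"

    def connection_endpoints(div_id: str, src_bits: str, dst_bits: str) -> tuple[str, str]:
        # Note 3: the DIV identifier is concatenated in front of the evolved
        # bit-strings, so both endpoints necessarily lie inside the DIV's
        # two sub-trees.
        return div_id + src_bits, div_id + dst_bits

    # The root DIV has the empty identifier; its sons are "0" and "1",
    # the sons of "0" are "00" and "01", and so on.
    assert child_ids("0") == ("00", "01")
    assert connection_endpoints("0", "01", "10") == ("001", "010")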


Author information

Correspondence to Jean-Baptiste Mouret.

Appendices

Appendix A: Results of the Wilcoxon–Mann–Whitney test

See Tables 1, 2 and 3.

Table 1 Cart-pole (two-tailed p values)
Table 2 Robotic arm (two-tailed p values)
Table 3 Lesion experiments

Appendix B: Complete attribute grammar

(Figures f–i: listing of the complete MENNAG attribute grammar, reproduced in the original article.)

Appendix C: Defined types and functions

MENNAG uses a short list of types and functions to handle attributes. A short reference follows.

Seven basic types are used in the MENNAG grammar:

  • string: character string

  • bitstring: string of bits which can mutate (bit flipping)

  • float: real number

  • evofloat: real number which can mutate (Gaussian mutation)

  • int: integer

  • set: a simple set which can handle any type

  • hash: a simple hash table

A vector of these types can be defined by appending the size enclosed in brackets. For instance, “evofloat[10]” defines a vector of 10 evofloats.
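To make the two mutable types concrete, here is a hypothetical Python emulation; the operators (bit flipping, Gaussian mutation) come from the list above, while the function names, mutation rate and standard deviation are illustrative assumptions:

    import random

    def flip(bit: str) -> str:
        # Flip a single bit.
        return "1" if bit == "0" else "0"

    def mutate_bitstring(bits: str, rate: float = 0.05) -> str:
        # bitstring mutation: flip each bit independently with probability rate.
        return "".join(flip(b) if random.random() < rate else b for b in bits)

    def mutate_evofloat(x: float, sigma: float = 0.1) -> float:
        # evofloat mutation: add Gaussian noise to the value.
        return x + random.gauss(0.0, sigma)

    # "evofloat[10]": a vector of 10 evofloats, mutated element-wise.
    weights = [random.uniform(-1.0, 1.0) for _ in range(10)]
    weights = [mutate_evofloat(w) for w in weights]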

Hash tables are used to emulate a structured type (struct in C). For instance, a neuron is “created” using “createhash(name = “id”, value = CELL.id, name = “spec”, value = CELL.spec)”.

Some simple functions are built in (a sketch of their semantics follows this list):

  • union: merge two sets

  • concatenate: concatenate two strings

  • createset: create an empty set

  • createhash: create a hash table, optionally filled with name/value pairs (see the neuron example above)

  • numericalvalue: convert the argument to a real number

  • prependtoattributes: concatenate a string in front of all the specified entries in all the hash tables contained in a set

  • addattribute: add an entry to every hash table contained in a set

  • map: apply a functor to every element of a set

  • aspermutation: compute the permutation corresponding to a vector of real numbers
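As a hypothetical illustration, the following Python fragment emulates a few of these built-ins; the semantics are paraphrased from this reference, not taken from the actual implementation:

    def createhash(**entries):
        # A hash table emulated as a plain dict, built from name/value pairs
        # as in the neuron example above (id, spec).
        return dict(entries)

    def addattribute(hash_set, name, value):
        # Add an entry to every hash table contained in a set
        # (a list here, since Python sets cannot hold dicts).
        for h in hash_set:
            h[name] = value

    def prependtoattributes(hash_set, name, prefix):
        # Concatenate a string in front of the specified entry in all the
        # hash tables contained in a set.
        for h in hash_set:
            h[name] = prefix + h[name]

    # Example: an enclosing DIV prefixing the identifiers of the neurons
    # it contains (cf. notes 2 and 3).
    neurons = [createhash(id="0", spec=0.5), createhash(id="1", spec=-0.2)]
    prependtoattributes(neurons, "id", "01")
    assert [n["id"] for n in neurons] == ["010", "011"]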

Appendix D: Experimental setup

The main parameter settings used for each encoding are listed below; a consolidated sketch of two of these setups follows the last list.

D.1 MENNAG

  • crossover rate: 0.3

  • min. number of neurons (random generation): 2

  • max. number of neurons (random generation): 10

  • min. number of connections (random generation): 2

  • max. number of connections (random generation): 12

  • mutation rate (random tree): 0.05

  • insertion mutation rate: 0.2

  • deletion mutation rate: 0.1

D.2 NEAT

  • starting network (cart-pole): each input directly connected to the output

  • parameter file: p2nv.ne (available in NEAT’s source code)

D.3 ModNet

  • max. number of modules: 10

  • crossover rate: 0.6

  • no predefined module in the initial module pool

  • add model module rate: 0.01

  • insert module rate: 0.01

  • delete model module rate: 0.005

  • mutation module rate: 0.05

D.4 Direct encoding

  • max. number of neurons: 12

  • min. number of neurons: 2

  • max. number of connections: 10

  • min. number of connections: 2

  • add neuron rate: 0.1

  • delete neuron rate: 0.1

  • add connection rate: 0.25

  • delete connection rate: 0.25

  • change connection rate: 0.25
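As announced above, the sketch below collects two of these setups into plain Python dictionaries, e.g. as a starting point for a re-implementation; the key names are our paraphrases, while the values are copied verbatim from the lists:

    MENNAG_PARAMS = {
        "crossover_rate": 0.3,
        "min_neurons_random_gen": 2,
        "max_neurons_random_gen": 10,
        "min_connections_random_gen": 2,
        "max_connections_random_gen": 12,
        "mutation_rate_random_tree": 0.05,
        "insertion_mutation_rate": 0.2,
        "deletion_mutation_rate": 0.1,
    }

    DIRECT_ENCODING_PARAMS = {
        "max_neurons": 12,
        "min_neurons": 2,
        "max_connections": 10,
        "min_connections": 2,
        "add_neuron_rate": 0.1,
        "delete_neuron_rate": 0.1,
        "add_connection_rate": 0.25,
        "delete_connection_rate": 0.25,
        "change_connection_rate": 0.25,
    }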


Cite this article

Mouret, J.-B., Doncieux, S. MENNAG: a modular, regular and hierarchical encoding for neural-networks based on attribute grammars. Evol. Intel. 1, 187–207 (2008). https://doi.org/10.1007/s12065-008-0015-7
