Published in: Neural Computing and Applications 8/2015

1 November 2015 | Original Article

Designing evolutionary feedforward neural networks using social spider optimization algorithm

Authors: Seyedeh Zahra Mirjalili, Shahrzad Saremi, Seyed Mohammad Mirjalili


Abstract

Training feedforward neural networks (FNNs) is considered a challenging task due to the nonlinear nature of the problem and the presence of a large number of local solutions. The literature shows that heuristic optimization algorithms are able to tackle such problems much better than mathematical and deterministic methods. In this paper, we propose a new trainer based on the recently proposed social spider optimization (SSO) algorithm. The FNN trained by SSO (FNNSSO) is benchmarked on five standard classification data sets: XOR, balloon, Iris, breast cancer, and heart. The results are verified by comparison with five other well-known heuristics and show that the proposed FNNSSO provides very promising results compared with the other algorithms.
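The training scheme described in the abstract, encoding an FNN's weights and biases as a single real-valued vector and letting a population-based heuristic minimize the network's mean squared error, can be illustrated with a small sketch. The following Python snippet is an illustrative assumption, not the authors' implementation: it uses a 2-2-1 network on the XOR data set and a simplified best-guided random search as a stand-in for SSO's cooperative, vibration-based operators.

```python
import numpy as np

# Hypothetical sketch: a 2-2-1 FNN for XOR whose weights and biases are
# flattened into one vector, so any population-based heuristic (SSO, PSO,
# GA, ...) can minimize the MSE fitness over that vector.

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

N_IN, N_HID, N_OUT = 2, 2, 1
DIM = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT  # all weights + biases

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def decode(vec):
    """Unpack a flat candidate vector into the FNN's weight matrices and biases."""
    i = 0
    W1 = vec[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = vec[i:i + N_HID]; i += N_HID
    W2 = vec[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = vec[i:i + N_OUT]
    return W1, b1, W2, b2

def mse(vec):
    """Fitness: mean squared error of the FNN encoded by `vec` on the data set."""
    W1, b1, W2, b2 = decode(vec)
    out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel()
    return np.mean((out - y) ** 2)

# Generic best-guided search loop standing in for SSO's update rules.
rng = np.random.default_rng(0)
pop = rng.uniform(-10, 10, size=(30, DIM))
best = min(pop, key=mse)
for _ in range(500):
    # Move each candidate towards the current best plus a random perturbation
    # (a placeholder for the cooperative operators of the real algorithm).
    pop = (pop + rng.uniform(0, 1, pop.shape) * (best - pop)
           + rng.normal(0, 0.1, pop.shape))
    pop = np.clip(pop, -10, 10)
    cand = min(pop, key=mse)
    if mse(cand) < mse(best):
        best = cand.copy()

W1, b1, W2, b2 = decode(best)
print("best MSE:", mse(best))
print("outputs:", np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel(), 3))
```

The key design point is that the heuristic never sees the network structure itself; it only evaluates the MSE of candidate weight vectors, which is what makes such trainers easy to swap between algorithms.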


Metadata
Title
Designing evolutionary feedforward neural networks using social spider optimization algorithm
Authors
Seyedeh Zahra Mirjalili
Shahrzad Saremi
Seyed Mohammad Mirjalili
Publication date
1 November 2015
Publisher
Springer London
Published in
Neural Computing and Applications / Issue 8/2015
Print ISSN: 0941-0643
Electronic ISSN: 1433-3058
DOI
https://doi.org/10.1007/s00521-015-1847-6
