Published in: Cluster Computing 4/2019

15-02-2019

Evolving neural networks using bird swarm algorithm for data classification and regression applications

Authors: Ibrahim Aljarah, Hossam Faris, Seyedali Mirjalili, Nailah Al-Madi, Alaa Sheta, Majdi Mafarja

Abstract

This work proposes a new evolutionary multilayer perceptron neural network trained using the recently proposed Bird Swarm Algorithm (BSA). The problem of finding the optimal connection weights and neuron biases is first formulated as a minimization problem with the mean square error as the objective function. The BSA is then used to estimate the global optimum of this problem. A comprehensive comparative study is conducted using 13 classification datasets, three function-approximation datasets, and one real-world case study (the Tennessee Eastman chemical reactor problem) to benchmark the performance of the proposed evolutionary neural network. The results are compared with those of well-regarded conventional and evolutionary trainers and show that the proposed method is very competitive. The paper also presents a deep analysis of the results, revealing the flexibility, robustness, and reliability of the proposed trainer when applied to different datasets.
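The formulation described above — flattening an MLP's connection weights and biases into one parameter vector, scoring it by mean square error, and searching that space with a population-based metaheuristic — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dataset is a toy regression problem, and the simple move-toward-best update loop merely stands in for BSA's full foraging/vigilance/flight behaviors.

```python
import numpy as np

def decode(theta, n_in, n_hid, n_out):
    """Unpack a flat parameter vector into the weights/biases of a 1-hidden-layer MLP."""
    i = 0
    W1 = theta[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = theta[i:i + n_hid]; i += n_hid
    W2 = theta[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = theta[i:i + n_out]
    return W1, b1, W2, b2

def mse(theta, X, y, n_hid):
    """Objective function: mean square error of the decoded network on (X, y)."""
    W1, b1, W2, b2 = decode(theta, X.shape[1], n_hid, 1)
    h = np.tanh(X @ W1 + b1)      # hidden layer with tanh activation
    out = h @ W2 + b2             # linear output layer
    return np.mean((out.ravel() - y) ** 2)

# Toy regression problem: approximate y = sin(x)
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X).ravel()

n_hid = 8
dim = 1 * n_hid + n_hid + n_hid * 1 + 1   # total number of weights and biases

# Generic population-based search (stand-in for the BSA swarm update)
pop = rng.uniform(-1, 1, (30, dim))
fit = np.array([mse(p, X, y, n_hid) for p in pop])
for _ in range(300):
    best = pop[fit.argmin()]
    # each candidate moves toward the current best with random jitter,
    # and a move is kept only if it lowers the MSE
    trial = pop + rng.uniform(0, 1, (30, 1)) * (best - pop) \
                + 0.1 * rng.normal(size=pop.shape)
    tfit = np.array([mse(p, X, y, n_hid) for p in trial])
    improved = tfit < fit
    pop[improved], fit[improved] = trial[improved], tfit[improved]

print(f"best MSE: {fit.min():.4f}")
```

The key point the sketch captures is that the trainer treats the network purely as a black-box function of its parameter vector, so no gradient information is required — only repeated evaluations of the MSE objective.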


93.
go back to reference Yang, Z., Hoseinzadeh, M., Andrews, A., Mayers, C., Evans, D.T., Bolt, R.T., Bhimani, J., Mi, N., Swanson, S.: Autotiering: automatic data placement manager in multi-tier all-flash datacenter. In: 2017 IEEE 36th International on Performance Computing and Communications Conference (IPCCC), pp. 1–8. IEEE (2017) Yang, Z., Hoseinzadeh, M., Andrews, A., Mayers, C., Evans, D.T., Bolt, R.T., Bhimani, J., Mi, N., Swanson, S.: Autotiering: automatic data placement manager in multi-tier all-flash datacenter. In: 2017 IEEE 36th International on Performance Computing and Communications Conference (IPCCC), pp. 1–8. IEEE (2017)
94.
go back to reference Yang, Z., Jia, D., Ioannidis, S., Mi, N., Sheng, B.: Intermediate data caching optimization for multi-stage and parallel big data frameworks. arXiv:1804.10563 (2018) Yang, Z., Jia, D., Ioannidis, S., Mi, N., Sheng, B.: Intermediate data caching optimization for multi-stage and parallel big data frameworks. arXiv:​1804.​10563 (2018)
95.
go back to reference Yao, X.: A review of evolutionary artificial neural networks. Int. J. Intell. Syst. 8(4), 539–567 (1993) Yao, X.: A review of evolutionary artificial neural networks. Int. J. Intell. Syst. 8(4), 539–567 (1993)
96.
go back to reference Yegnanarayana, B.: Artificial neural networks. PHI Learning Pvt. Ltd., New Delhi (2009) Yegnanarayana, B.: Artificial neural networks. PHI Learning Pvt. Ltd., New Delhi (2009)
97.
go back to reference Zhang, G.P.: Neural networks for classification: a survey. IEEE Trans. Syst. Man Cybern. C 30(4), 451–462 (2000)MathSciNet Zhang, G.P.: Neural networks for classification: a survey. IEEE Trans. Syst. Man Cybern. C 30(4), 451–462 (2000)MathSciNet
98.
go back to reference Zhang, N.: An online gradient method with momentum for two-layer feedforward neural networks. Appl. Math. Comput. 212(2), 488–498 (2009)MathSciNetMATH Zhang, N.: An online gradient method with momentum for two-layer feedforward neural networks. Appl. Math. Comput. 212(2), 488–498 (2009)MathSciNetMATH
99.
go back to reference Zhang, C., Shao, H., Li, Y.: Particle swarm optimisation for evolving artificial neural network. In: 2000 IEEE International Conference on Systems, Man, and Cybernetics, vol. 4, pp. 2487–2490. IEEE (2000) Zhang, C., Shao, H., Li, Y.: Particle swarm optimisation for evolving artificial neural network. In: 2000 IEEE International Conference on Systems, Man, and Cybernetics, vol. 4, pp. 2487–2490. IEEE (2000)
100.
go back to reference Zhang, J., Sanderson, A.C.: Jade: adaptive differential evolution with optional external archive. IEEE Trans. Evol. Comput. 13(5), 945–958 (2009) Zhang, J., Sanderson, A.C.: Jade: adaptive differential evolution with optional external archive. IEEE Trans. Evol. Comput. 13(5), 945–958 (2009)
101.
go back to reference Zhang, J.-R., Zhang, J., Lok, T.-M., Lyu, M.R.: A hybrid particle swarm optimization–back-propagation algorithm for feedforward neural network training. Appl. Math. Comput. 185(2), 1026–1037 (2007)MATH Zhang, J.-R., Zhang, J., Lok, T.-M., Lyu, M.R.: A hybrid particle swarm optimization–back-propagation algorithm for feedforward neural network training. Appl. Math. Comput. 185(2), 1026–1037 (2007)MATH
Metadata
Title
Evolving neural networks using bird swarm algorithm for data classification and regression applications
Authors
Ibrahim Aljarah
Hossam Faris
Seyedali Mirjalili
Nailah Al-Madi
Alaa Sheta
Majdi Mafarja
Publication date
15-02-2019
Publisher
Springer US
Published in
Cluster Computing / Issue 4/2019
Print ISSN: 1386-7857
Electronic ISSN: 1573-7543
DOI
https://doi.org/10.1007/s10586-019-02913-5