ABSTRACT
Generative Adversarial Networks (GANs) are popular tools for generative modeling. The dynamics of their adversarial learning give rise to convergence pathologies during training, such as mode collapse and discriminator collapse. In machine learning, ensembles of predictors often outperform a single predictor on many tasks. In this study, we apply two evolutionary algorithms (EAs) to create ensembles that re-purpose generative models, i.e., given a set of heterogeneous generators that were optimized for one objective (e.g., minimizing the Fréchet Inception Distance), we create ensembles of them that optimize a different objective (e.g., maximizing the diversity of the generated samples). The first method fixes the exact size of the ensemble, whereas the second method constrains only an upper bound on the ensemble size. Experimental analysis on the MNIST image benchmark demonstrates that both EA ensemble-creation methods can re-purpose the models without reducing their original functionality. The EA-based methods perform significantly better than other heuristic-based methods. Of the two evolutionary methods, the one that bounds only the maximum ensemble size performs best.
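To make the ensemble-selection idea concrete, the sketch below shows the second variant described in the abstract (an upper bound on ensemble size, rather than a fixed size) as a simple genetic algorithm over binary membership masks. This is a minimal illustration, not the paper's actual algorithm: the "generators" are hypothetical stand-ins summarized only by the data mode each one covers, and the diversity objective is a toy mode-coverage count rather than a real sample-diversity metric.

```python
import random

random.seed(42)

# Hypothetical pool of pre-trained generators; each is summarized here by
# the single data mode it covers (a stand-in for a trained GAN generator).
MODES = [0, 0, 1, 1, 2, 3, 3, 2]
MAX_SIZE = 4  # upper bound on ensemble size (no exact-size constraint)

def fitness(mask):
    """Toy diversity objective: number of distinct modes the selected
    ensemble covers, invalidated when it is empty or exceeds the bound."""
    size = sum(mask)
    if size == 0 or size > MAX_SIZE:
        return -1.0
    return float(len({MODES[i] for i, bit in enumerate(mask) if bit}))

def evolve(pop_size=20, generations=50):
    """Evolve binary membership masks with truncation selection,
    one-point crossover, and single-bit mutation."""
    pop = [[random.randint(0, 1) for _ in MODES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(MODES))   # one-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(len(MODES))] ^= 1  # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(sum(best), fitness(best))
```

A fixed-size variant (the first method in the abstract) would instead repair or reject any mask whose popcount differs from the required ensemble size; the upper-bound variant above lets the EA also search over smaller ensembles.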
Index Terms
- Re-purposing heterogeneous generative ensembles with evolutionary computation