
2018 | Original Paper | Book Chapter

A New Family of Generative Adversarial Nets Using Heterogeneous Noise to Model Complex Distributions

Authors: Ancheng Lin, Jun Li, Lujuan Zhang, Lei Shi, Zhenyuan Ma

Published in: AI 2018: Advances in Artificial Intelligence

Publisher: Springer International Publishing


Abstract

Generative adversarial nets (GANs) are an effective framework for constructing data models and enjoy desirable theoretical justification. On the other hand, realizing GANs for practical, complex data distributions often requires careful configuration of the generator, the discriminator, the objective function, and the training method, and can involve much non-trivial effort.
We propose a novel family of generative adversarial nets in which both continuous noise and random binary codes are employed in the generating process. The binary codes in the new GAN model (named BGAN) play the role of categorical latent variables, which improve model capability and training stability when dealing with complex data distributions. BGAN has been evaluated and compared with existing GANs trained with state-of-the-art methods on both synthetic and practical data. The empirical evaluation shows the effectiveness of BGAN.
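The abstract describes a generator whose latent input mixes continuous noise with random binary codes. The paper's actual architecture is not given here, so the following is only a minimal NumPy sketch of that idea under assumed dimensions; `sample_heterogeneous_noise`, `TinyGenerator`, and all sizes are hypothetical names chosen for illustration.

```python
import numpy as np

def sample_heterogeneous_noise(batch_size, cont_dim=100, code_dim=10, rng=None):
    """Illustrative BGAN-style generator input: continuous Gaussian noise
    concatenated with random binary codes (categorical latent variables)."""
    rng = rng or np.random.default_rng(0)
    z = rng.standard_normal((batch_size, cont_dim))       # continuous noise
    b = rng.integers(0, 2, size=(batch_size, code_dim))   # binary codes in {0, 1}
    return np.concatenate([z, b.astype(np.float64)], axis=1)

class TinyGenerator:
    """Minimal one-hidden-layer generator mapping heterogeneous noise to data space."""
    def __init__(self, in_dim, hidden_dim, out_dim, rng=None):
        rng = rng or np.random.default_rng(1)
        self.W1 = rng.standard_normal((in_dim, hidden_dim)) * 0.02
        self.W2 = rng.standard_normal((hidden_dim, out_dim)) * 0.02

    def __call__(self, noise):
        h = np.maximum(noise @ self.W1, 0.0)  # ReLU hidden layer
        return np.tanh(h @ self.W2)           # samples bounded in [-1, 1]

noise = sample_heterogeneous_noise(4)                       # 4 latent vectors
gen = TinyGenerator(in_dim=noise.shape[1], hidden_dim=64, out_dim=2)
samples = gen(noise)
print(samples.shape)  # (4, 2)
```

In this sketch, fixing the binary part of the input while resampling the continuous part would let each code index a mode of the modeled distribution, which is the intuition behind using categorical latent variables for multi-modal data.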


Metadata
DOI
https://doi.org/10.1007/978-3-030-03991-2_63