2020 | Original Paper | Book Chapter

Interpreting Galaxy Deblender GAN from the Discriminator’s Perspective

Authors: Heyi Li, Yuewei Lin, Klaus Mueller, Wei Xu

Published in: Advances in Visual Computing

Publisher: Springer International Publishing


Abstract

In large galaxy surveys it can be difficult to separate overlapping galaxies, a process called deblending. Generative adversarial networks (GANs) have shown great potential in addressing this fundamental problem. However, it remains a significant challenge to understand how the network works, which is particularly difficult for non-expert users. This research focuses on the behavior of one of the network's major components, the Discriminator, which plays a vital role but is often overlooked. Specifically, we propose an enhanced Layer-wise Relevance Propagation (LRP) algorithm called Polarized-LRP. It generates a heatmap-based visualization that highlights the areas of the input image contributing to the network's decision. The visualization consists of two parts: a positive contribution heatmap for images classified as ground truth, and a negative contribution heatmap for images classified as generated. As a use case, we choose the deblending of two overlapping galaxy images via a branched GAN model. Using the Galaxy Zoo dataset, we demonstrate that our method clearly reveals the attention areas the Discriminator uses to differentiate generated galaxy images from ground-truth images, and that it outperforms the original LRP method. To connect the Discriminator's impact to the Generator, we also visualize how the Generator's attention shifts across the training process. An interesting result is the detection of a problematic data augmentation procedure that would otherwise have remained hidden. We find that our proposed method serves as a useful visual analytical tool for more effective training and a deeper understanding of GAN models.
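To make the idea of signed relevance heatmaps concrete, the following is a minimal sketch (not the authors' implementation) of a single LRP-epsilon backward step through one fully connected layer, with the redistributed relevance split into a positive and a negative map in the spirit of the two heatmaps described above. The function name lrp_dense_signed, the toy shapes, and the epsilon stabilizer are illustrative assumptions; the exact Polarized-LRP rule used in the paper may differ.

import numpy as np

def lrp_dense_signed(a, w, b, r_out, eps=1e-6):
    # One LRP-epsilon backward step through a dense layer z = a @ w + b,
    # splitting the redistributed relevance into positive and negative parts.
    z = a @ w + b                        # pre-activations of the layer
    s = r_out / (z + eps * np.sign(z))   # stabilized relevance per output unit
    contrib = w * a[:, None]             # per-connection contributions a_i * w_ij
    r = contrib * s[None, :]             # relevance routed along each connection
    r_pos = np.clip(r, 0, None).sum(axis=1)   # evidence supporting the decision
    r_neg = np.clip(r, None, 0).sum(axis=1)   # evidence against the decision
    return r_pos, r_neg

# Toy usage: explain the Discriminator's final "real vs. generated" logit.
rng = np.random.default_rng(0)
a = rng.normal(size=8)            # activations feeding the output layer
w = rng.normal(size=(8, 1))       # weights of the output layer
b = np.zeros(1)
r_pos, r_neg = lrp_dense_signed(a, w, b, r_out=np.array([1.0]))
print("positive relevance:", r_pos)
print("negative relevance:", r_neg)

Repeating such a step layer by layer down to the input pixels would yield the two heatmaps: the positive map for images the Discriminator accepts as ground truth, and the negative map for those it flags as generated.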


Metadata
Title
Interpreting Galaxy Deblender GAN from the Discriminator’s Perspective
Authors
Heyi Li
Yuewei Lin
Klaus Mueller
Wei Xu
Copyright Year
2020
DOI
https://doi.org/10.1007/978-3-030-64559-5_18
