2020 | Original Paper | Book Chapter

Context-Aware Residual Network with Promotion Gates for Single Image Super-Resolution

Authors: Xiaozhong Ji, Yirui Wu, Tong Lu

Published in: MultiMedia Modeling

Publisher: Springer International Publishing

Abstract

Deep learning models have achieved significant success in numerous vision-based applications. However, directly applying deep structures to single image super-resolution (SISR) produces poor visual results such as blurry patches and loss of detail, because low-frequency information is treated equally and indiscriminately across different patches and channels. To ease this problem, we propose a novel context-aware deep residual network with promotion gates, named G-CASR, for SISR. In the proposed G-CASR network, a sequence of G-CASR modules is cascaded to transform low-resolution features into highly informative features. Within each G-CASR module, we design a dual-attention residual block (DRB) that captures abundant and varied context information by coupling spatial and channel attention. To further improve the informativeness of the extracted context, a promotion gate (PG) analyzes the inherent characteristics of the input data at each module, indicating how to enhance contributive information and suppress useless information. Experiments on five public datasets (Set5, Set14, B100, Urban100 and Manga109) show that the proposed G-CASR achieves an average improvement of 1.112 dB in PSNR and 0.0255 in SSIM compared with recent methods including SRCNN, VDSR, LapSRN and EDSR, while requiring only about 25% of the memory cost of EDSR.
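To make the architecture described in the abstract concrete, below is a minimal PyTorch-style sketch of one G-CASR module built from dual-attention residual blocks (DRB) and a promotion gate (PG). The specific layer configurations, the squeeze-and-excitation-style channel attention, the single-mask spatial attention, and the scalar gating scheme are all assumptions made for illustration; they are not the authors' published implementation.

```python
# Illustrative sketch only: layer sizes, attention forms, and the gating
# scheme are assumptions, not the published G-CASR implementation.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention (assumed form)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Reweight each channel by a global descriptor of that channel.
        return x * self.fc(self.pool(x))


class SpatialAttention(nn.Module):
    """Single-channel spatial attention mask (assumed form)."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Reweight each spatial position by a learned saliency map.
        return x * self.conv(x)


class DualAttentionResidualBlock(nn.Module):
    """DRB: residual block whose features pass through channel and
    spatial attention before the skip connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention(channels)

    def forward(self, x):
        return x + self.sa(self.ca(self.body(x)))


class PromotionGate(nn.Module):
    """PG: predicts a scalar in (0, 1) from global input statistics and
    rescales the module's residual branch (assumed gating scheme)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        g = self.fc(self.pool(x).flatten(1))  # shape (N, 1)
        return g.view(-1, 1, 1, 1)


class GCASRModule(nn.Module):
    """One G-CASR module: a stack of DRBs whose output is modulated by
    the promotion gate before being added back to the module input."""
    def __init__(self, channels: int = 64, num_blocks: int = 4):
        super().__init__()
        self.blocks = nn.Sequential(
            *[DualAttentionResidualBlock(channels) for _ in range(num_blocks)]
        )
        self.gate = PromotionGate(channels)

    def forward(self, x):
        return x + self.gate(x) * self.blocks(x)


if __name__ == "__main__":
    # Quick shape check on a dummy low-resolution feature map.
    feats = torch.randn(2, 64, 48, 48)
    print(GCASRModule(64)(feats).shape)  # torch.Size([2, 64, 48, 48])
```

In this sketch, cascading several such modules and adding an upsampling head would yield the full network; the gate leaves the module free to suppress a block stack that contributes little for a given input, which matches the abstract's stated purpose of enhancing contributive information and suppressing useless information.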

References
1. Bevilacqua, M., Roumy, A., Guillemot, C., Alberi-Morel, M.L.: Low-complexity single-image super-resolution based on nonnegative neighbor embedding. In: Proceedings of BMVC (2012)
2. Cho, K., et al.: Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078 (2014)
4. Fujimoto, A., Ogawa, T., Yamamoto, K., Matsui, Y., Yamasaki, T., Aizawa, K.: Manga109 dataset and creation of metadata. In: Proceedings of the 1st International Workshop on coMics ANalysis, Processing and Understanding, p. 2 (2016)
5. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735–1780 (1997)
6. Hu, Y., Li, J., Huang, Y., Gao, X.: Channel-wise and spatial feature modulation network for single image super-resolution. arXiv preprint arXiv:1809.11130 (2018)
7. Huang, J.B., Singh, A., Ahuja, N.: Single image super-resolution from transformed self-exemplars. In: Proceedings of CVPR, pp. 5197–5206 (2015)
8. Kim, J., Lee, J.K., Lee, K.M.: Accurate image super-resolution using very deep convolutional networks. In: Proceedings of CVPR, pp. 1646–1654 (2016)
9. Kim, J.H., Choi, J.H., Cheon, M., Lee, J.S.: RAM: residual attention module for single image super-resolution. arXiv preprint arXiv:1811.12043 (2018)
10. Lai, W.S., Huang, J.B., Ahuja, N., Yang, M.H.: Deep Laplacian pyramid networks for fast and accurate super-resolution. In: Proceedings of CVPR (2017)
11. Lim, B., Son, S., Kim, H., Nah, S., Lee, K.M.: Enhanced deep residual networks for single image super-resolution. In: Proceedings of CVPR, vol. 1, p. 4 (2017)
12. Martin, D., Fowlkes, C., Tal, D., Malik, J.: A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In: Proceedings of ICCV, vol. 2, pp. 416–423 (2001)
13. Timofte, R., Agustsson, E., Van Gool, L., Yang, M.H., Zhang, L.: NTIRE 2017 challenge on single image super-resolution: methods and results. In: Proceedings of Computer Vision and Pattern Recognition Workshops, pp. 114–125 (2017)
15. Tong, T., Li, G., Liu, X., Gao, Q.: Image super-resolution using dense skip connections. In: Proceedings of ICCV, pp. 4809–4817 (2017)
Metadata
Title
Context-Aware Residual Network with Promotion Gates for Single Image Super-Resolution
Authors
Xiaozhong Ji
Yirui Wu
Tong Lu
Copyright Year
2020
DOI
https://doi.org/10.1007/978-3-030-37734-2_12