
02.03.2021

Context-Enhanced Representation Learning for Single Image Deraining

Authors: Guoqing Wang, Changming Sun, Arcot Sowmya

Published in: International Journal of Computer Vision | Issue 5/2021


Abstract

Perception of content and structure in images with rainstreaks or raindrops is challenging, and it often calls for robust deraining algorithms to remove the diversified rainy effects. Much progress has been made on the design of advanced encoder–decoder single image deraining networks. However, most of the existing networks are built in a blind manner and often produce over- or under-deraining artefacts. In this paper, we point out, for the first time, that the unsatisfactory results are caused by the highly imbalanced distribution between rainy effects and varied background scenes. Ignoring this phenomenon biases the representation learned by the encoder towards rainy regions and causes it to pay less attention to the valuable contextual regions. To resolve this, a context-enhanced representation learning and deraining network is proposed with a novel two-branch encoder design. Specifically, one branch takes the rainy image directly as input for learning a mixed representation depicting the variation of both rainy regions and contextual regions, while the other branch is guided by a carefully learned soft attention mask to learn an embedding depicting only the contextual regions. By combining the embeddings from these two branches with a carefully designed co-occurrence modelling module, and then improving the semantic property of the co-occurrence features via a bi-directional attention layer, the underlying imbalanced learning problem is resolved. Extensive experiments are carried out on removing rainstreaks and raindrops from both synthetic and real rainy images, and the proposed model is demonstrated to produce significantly better results than state-of-the-art models. In addition, comprehensive ablation studies are performed to analyze the contributions of the different designs. Code and pre-trained models will be publicly available at https://github.com/RobinCSIRO/CERLD-Net.git.
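For readers who want to see how such a two-branch encoder might be wired up, the PyTorch sketch below illustrates the idea described in the abstract. The layer sizes, the convolutional mask head, and the single 1x1 fusion convolution standing in for the co-occurrence modelling module and the bi-directional attention layer are illustrative assumptions, not the authors' released CERLD-Net implementation.

```python
import torch
import torch.nn as nn

class TwoBranchEncoder(nn.Module):
    """Minimal sketch of the two-branch encoder idea (assumed structure, not CERLD-Net)."""

    def __init__(self, ch=64):
        super().__init__()

        def branch():
            # A small strided conv stack; the real encoder is considerably deeper.
            return nn.Sequential(
                nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True))

        self.rain_branch = branch()      # learns a mixed rain + background representation
        self.context_branch = branch()   # learns a representation of contextual regions only
        self.mask_head = nn.Sequential(  # soft attention mask emphasising non-rain regions
            nn.Conv2d(3, 1, 3, padding=1), nn.Sigmoid())
        self.fuse = nn.Conv2d(2 * ch, ch, 1)  # stand-in for co-occurrence modelling + attention

    def forward(self, rainy):
        mixed = self.rain_branch(rainy)              # (B, ch, H/4, W/4)
        mask = self.mask_head(rainy)                 # (B, 1, H, W), values in [0, 1]
        context = self.context_branch(rainy * mask)  # mask-guided branch sees contextual regions
        return self.fuse(torch.cat([mixed, context], dim=1))
```

Under these assumptions, `TwoBranchEncoder()(torch.randn(1, 3, 256, 256))` yields a (1, 64, 64, 64) bottleneck tensor that would be passed to the decoder in place of the single, rain-biased bottleneck of a conventional encoder–decoder deraining network.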


Appendices
Footnotes
1
The detailed steps for obtaining the figures are provided in the Appendix.
 
3
This layer conveys the information learned by the encoder and serves as the input from which the decoder generates images; its behaviour therefore determines the quality of the restored images. This claim is also supported by the disentangled representation learning theory in generative models (Diederik and Max 2013; Tschannen et al. 2018; Chen et al. 2016; Bengio et al. 2013; Locatello et al. 2018).
 
7
\(\alpha \) and \(\beta \) can also be interpreted as the quantized contribution of different patches to the final prediction.
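Read this way, softmax-normalized patch scores sum to one, so each weight is directly the fraction that a patch contributes to the output. The sketch below illustrates that interpretation with assumed shapes and names; it is not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def patch_contributions(patch_feats, scores):
    # patch_feats: (B, N, C) patch embeddings; scores: (B, N) raw attention logits.
    weights = F.softmax(scores, dim=1)                          # per-patch contributions, sum to 1
    pooled = torch.einsum('bn,bnc->bc', weights, patch_feats)   # attention-weighted aggregate
    return pooled, weights                                      # weights[b, n] = share of patch n
```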
 
References
Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., et al. (2016). TensorFlow: A system for large-scale machine learning. In USENIX Symposium on Operating Systems Design and Implementation.
Abel, G., Krahenbuhl, P., Joost, V., & Bengio, T. (2018). Image-to-image translation for cross-domain disentanglement. In Advances in Neural Information Processing Systems (NeurIPS).
Barnum, P. C., Narasimhan, S., & Kanade, T. (2010). Analysis of rain and snow in frequency space. International Journal of Computer Vision, 86(2), 256–275.
Bengio, Y., Courville, A., & Vincent, P. (2013). Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8), 1798–1828.
Bossu, J., Hautière, N., & Tarel, J. P. (2011). Rain or snow detection in image sequences through use of a histogram of orientation of streaks. International Journal of Computer Vision, 93(3), 348–367.
Brewer, N., & Liu, N. (2008). Using the shape characteristics of rain to identify and remove rain from video. In Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition and Structural and Syntactic Pattern Recognition.
Chen, X., Duan, Y., Houthooft, R., Schulman, J., Sutskever, I., & Abbeel, P. (2016). InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems (NeurIPS).
Chen, Y., & Pock, T. (2016). Trainable nonlinear reaction diffusion: A flexible framework for fast and effective image restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(6), 1256–1272.
Chen, Y. L., & Hsu, C. T. (2013). A generalized low-rank appearance model for spatio-temporally correlated rainstreaks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Choi, Y., Choi, M., Kim, M., Ha, J. W., Kim, S., & Choo, J. (2018). StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Dabov, K., Foi, A., Katkovnik, V., & Egiazarian, K. (2007). Color image denoising via sparse 3D collaborative filtering with grouping constraint in luminance-chrominance space. In Proceedings of the IEEE Conference on Image Processing (ICIP).
Diederik, P. K., & Max, W. (2013). Auto-encoding variational Bayes. In Proceedings of the International Conference on Learning Representations (ICLR).
Eigen, D., Krishnan, D., & Fergus, R. (2013). Restoring an image taken through a window covered with dirt or rain. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).
Fu, X., Huang, J., Ding, X., Liao, Y., & Paisley, J. (2017a). Clearing the skies: A deep network architecture for single-image rain removal. IEEE Transactions on Image Processing, 26(6), 2944–2956.
Fu, X., Huang, J., Zeng, D., Huang, Y., Ding, X., & Paisley, J. (2017b). Removing rain from single images via a deep detail network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Garg, K., & Nayar, S. K. (2004). Detection and removal of rain from videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Garg, K., & Nayar, S. K. (2007). Vision and rain. International Journal of Computer Vision, 75(1), 3–27.
Ge, Y., Li, Z., Zhao, H., Yin, G., Yi, S., & Wang, X. (2018). FD-GAN: Pose-guided feature distilling GAN for robust person re-identification. In Advances in Neural Information Processing Systems (NeurIPS).
He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Hu, J., Shen, L., & Sun, G. (2018). Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Huang, D. A., Kang, L. W., Wang, Y. C., & Lin, C. W. (2013). Self-learning based image decomposition with applications to single image denoising. IEEE Transactions on Multimedia, 16(1), 83–93.
Huang, G., Liu, Z., Laurens, V. D., & Weinberger, K. Q. (2017). Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Huang, X., Liu, M., Belongie, S., & Kautz, J. (2018). Multimodal unsupervised image-to-image translation. In Proceedings of the European Conference on Computer Vision (ECCV).
Iizuka, S., Serra, E. S., & Ishikawa, H. (2017). Globally and locally consistent image completion. ACM Transactions on Graphics, 36(4), 107–123.
Isola, P., Zhu, J. Y., Zhou, T. H., & Efros, A. A. (2017). Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Kang, L. W., Lin, C. W., & Fu, Y. H. (2011). Automatic single-image-based rain streaks removal via image decomposition. IEEE Transactions on Image Processing, 21(4), 1742–1755.
Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (ICLR).
Li, G., He, X., Zhang, W., Chang, H., Dong, L., & Lin, L. (2018a). Non-locally enhanced encoder–decoder network for single image deraining. In Proceedings of the ACM International Conference on Multimedia (ACMMM).
Li, S., Araujo, I. B., Ren, W., Wang, Z., Tokuda, E. K., Junior, R. H., et al. (2019). Single image deraining: A comprehensive benchmark analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Li, X., Wu, J., Lin, Z., Liu, H., & Zha, H. (2018b). Recurrent squeeze-and-excitation context aggregation net for single image deraining. In Proceedings of the European Conference on Computer Vision (ECCV).
Li, Y., Tan, R. T., Guo, X., Lu, J., & Brown, M. S. (2016). Rain streak removal using layer priors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Lin, T. Y., RoyChowdhury, A., & Maji, S. (2015). Bilinear CNN models for fine-grained visual recognition. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).
Locatello, F., Bauer, S., Lucic, M., Gelly, S., Schölkopf, B., & Bachem, O. (2018). Challenging common assumptions in the unsupervised learning of disentangled representations. In Advances in Neural Information Processing Systems (NeurIPS).
Luo, Y., Xu, Y., & Ji, H. (2015). Removing rain from a single image via discriminative sparse coding. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).
Makhzani, A., Shlens, J., Jaitly, N., Goodfellow, I., & Frey, B. (2016). Adversarial autoencoders. In Proceedings of the International Conference on Learning Representations (ICLR).
Mao, X., Shen, C., & Yang, Y. (2016). Image restoration using convolutional auto-encoders with symmetric skip connections. In Advances in Neural Information Processing Systems (NeurIPS).
Mao, X., Li, Q., Xie, H., Lau, R., Wang, Z., & Smolley, S. P. (2017). Least squares generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).
Martin, D., Fowlkes, C., Tal, D., & Malik, J. (2001). A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).
Pathak, D., Krahenbuhl, P., Donahue, J., Darrell, T., & Efros, A. A. (2016). Context encoders: Feature learning by inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Qian, R., Tan, R. T., Yang, W., Su, J., & Liu, J. (2018). Attentive generative adversarial network for raindrop removal from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Ren, D., Zuo, W., Hu, Q., Zhu, P., & Meng, D. (2019). Progressive image deraining networks: A better and simpler baseline. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Ren, W., Ma, L., Zhang, J., Pan, J., Cao, X., Liu, W., et al. (2018). Gated fusion network for single image dehazing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Ronneberger, O., Fischer, P., & Brox, T. (2015). U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI).
Santhaseelan, V., & Asari, V. K. (2015). Utilizing local phase information to remove rain from video. International Journal of Computer Vision, 112(1), 71–89.
Shih, Y. F., Yeh, Y. M., Lin, Y. Y., Weng, M. F., Lu, Y. C., & Chuang, Y. Y. (2017). Deep co-occurrence feature learning for visual object recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. In Proceedings of the International Conference on Learning Representations (ICLR).
Sun, S. H., Fan, S. P., & Wang, Y. C. (2016). Exploiting image structural similarity for single image rain removal. In Proceedings of the IEEE Conference on Image Processing (ICIP).
Tran, L., Yin, X., & Liu, X. (2017). Disentangled representation learning GAN for pose-invariant face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Tschannen, M., Bachem, O., & Lucic, M. (2018). Recent advances in autoencoder-based representation learning. In Advances in Neural Information Processing Systems (NeurIPS), Bayesian Deep Learning Workshop.
Vincent, P., Larochelle, H., Lajoie, I., Bengio, Y., & Manzagol, P. (2010). Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11, 3371–3408.
Wang, G., Sun, C., & Sowmya, A. (2019a). ERL-Net: Entangled representation learning for single image de-raining. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).
Wang, T., Yang, X., Xu, K., Chen, S., Zhang, Q., & Lau, R. W. (2019b). Spatial attentive single-image deraining with a high quality real rain dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Wang, X., Girshick, R., Gupta, A., & He, K. (2018). Non-local neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Wei, W., Meng, D., Zhao, Q., Xu, Z., & Wu, Y. (2019). Semi-supervised transfer learning for image rain removal. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Yang, W., Tan, R. T., Feng, J., Guo, Z., Yan, S., & Liu, J. (2019). Joint rain detection and removal from a single image with contextualized deep networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(6), 1377–1393.
Yang, W., Tan, R. T., Wang, S., Fang, Y., & Liu, J. (2020). Single image deraining: From model-based to data-driven and beyond. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Yasarla, R., & Patel, V. M. (2019). Uncertainty guided multi-scale residual learning using a cycle spinning CNN for single image deraining. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Yasarla, R., & Patel, V. M. (2020). Confidence measure guided single image de-raining. IEEE Transactions on Image Processing, 29, 4544–4555.
You, S., Tan, R. T., Kawakami, R., Mukaigawa, Y., & Ikeuchi, K. (2015). Adherent raindrop modeling, detection and removal in video. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(9), 1721–1733.
Yu, F., & Koltun, V. (2015). Multi-scale context aggregation by dilated convolutions. In Proceedings of the International Conference on Learning Representations (ICLR).
Zhang, H., & Patel, V. M. (2018). Density-aware single image deraining using a multi-stream dense network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Zhang, K., Zuo, W., Chen, Y., Meng, D., & Zhang, L. (2017a). Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Transactions on Image Processing, 26(7), 3142–3155.
Zhang, K., Zuo, W., Gu, S., & Zhang, L. (2017b). Learning deep CNN denoiser prior for image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Zhang, K., Zuo, W., & Zhang, L. (2018). FFDNet: Toward a fast and flexible solution for CNN-based image denoising. IEEE Transactions on Image Processing, 27(9), 4608–4622.
Zhou, B., Bau, D., Oliva, A., & Torralba, A. (2019). Interpreting deep visual representations via network dissection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(9), 2131–2145.
Metadata
Title
Context-Enhanced Representation Learning for Single Image Deraining
Authors
Guoqing Wang
Changming Sun
Arcot Sowmya
Publication date
02.03.2021
Publisher
Springer US
Published in
International Journal of Computer Vision / Issue 5/2021
Print ISSN: 0920-5691
Electronic ISSN: 1573-1405
DOI
https://doi.org/10.1007/s11263-020-01425-9
