Published in: International Journal of Computer Vision 11/2023

04.07.2023 | S.I.: Traditional Computer Vision in the Age of Deep Learning

Improving Domain Adaptation Through Class Aware Frequency Transformation

Authors: Vikash Kumar, Himanshu Patil, Rohit Lal, Anirban Chakraborty


Abstract

In this work, we explore the use of frequency transformation to reduce the domain shift between the source and target domains (e.g., synthetic and real images, respectively) for the domain adaptation task. Most unsupervised domain adaptation (UDA) algorithms focus on reducing the global domain shift between a labelled source domain and an unlabelled target domain by matching their marginal distributions under a small-domain-gap assumption; their performance degrades when the gap between the source and target distributions is large. To bring the source and target domains closer, we propose Class Aware Frequency Transformation (CAFT), a novel approach based on traditional image processing that uses pseudo-label-based, class-consistent low-frequency swapping to improve the overall performance of existing UDA algorithms. Compared with state-of-the-art deep learning based methods, the proposed approach is computationally more efficient and can easily be plugged into any existing UDA algorithm to improve its performance. Additionally, we introduce a novel scheme that uses the absolute difference between the top-2 class prediction probabilities to filter target pseudo labels into clean and noisy sets; samples with clean pseudo labels can then be used to improve the performance of unsupervised learning algorithms. We name the overall framework CAFT++ and evaluate it on top of different UDA algorithms across several public domain adaptation datasets. Our extensive experiments indicate that CAFT++ achieves significant performance gains across all the popular benchmarks.
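The two ideas summarised above lend themselves to a short illustration. The minimal NumPy sketch below shows (i) an FDA-style low-frequency amplitude swap between a target image and a source image of the same class as the target's pseudo label, which is the operation at the heart of CAFT, and (ii) the top-2 probability margin used to split target pseudo labels into clean and noisy sets. The function names, the band-size parameter `beta`, and the margin `threshold` are illustrative assumptions made for this sketch, not the authors' reference implementation.

```python
# Minimal sketch under the assumptions stated above; not the authors' code.
import numpy as np

def low_freq_swap(target_img: np.ndarray, source_img: np.ndarray,
                  beta: float = 0.1) -> np.ndarray:
    """Replace the low-frequency amplitude of `target_img` with that of a
    class-matched `source_img`, keep the target's phase, and invert the FFT.
    Both inputs are (H, W, C) arrays of the same spatial size."""
    fft_t = np.fft.fft2(target_img, axes=(0, 1))
    fft_s = np.fft.fft2(source_img, axes=(0, 1))
    amp_t, pha_t = np.abs(fft_t), np.angle(fft_t)
    amp_s = np.abs(fft_s)

    # Shift spectra so the low frequencies sit at the centre.
    amp_t = np.fft.fftshift(amp_t, axes=(0, 1))
    amp_s = np.fft.fftshift(amp_s, axes=(0, 1))

    h, w = target_img.shape[:2]
    b = int(min(h, w) * beta)              # half-width of the swapped low-frequency band
    ch, cw = h // 2, w // 2
    amp_t[ch - b:ch + b, cw - b:cw + b] = amp_s[ch - b:ch + b, cw - b:cw + b]

    amp_t = np.fft.ifftshift(amp_t, axes=(0, 1))
    mixed = amp_t * np.exp(1j * pha_t)     # swapped amplitude, original target phase
    return np.real(np.fft.ifft2(mixed, axes=(0, 1)))

def split_by_top2_margin(probs: np.ndarray, threshold: float = 0.4):
    """Split target samples into clean / noisy pseudo-label index sets using the
    absolute difference between their top-2 class probabilities.
    `probs` has shape (N, num_classes)."""
    top2 = np.sort(probs, axis=1)[:, -2:]         # two largest probabilities per sample
    margin = np.abs(top2[:, 1] - top2[:, 0])      # top-1 minus top-2
    clean_idx = np.where(margin >= threshold)[0]  # confident pseudo labels
    noisy_idx = np.where(margin < threshold)[0]   # ambiguous pseudo labels
    return clean_idx, noisy_idx
```

In a full pipeline, the frequency-swapped target images and the clean pseudo-label subset returned by `split_by_top2_margin` would feed whichever backbone UDA algorithm CAFT++ is plugged into; the exact training losses follow that underlying algorithm.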


Metadata
Title
Improving Domain Adaptation Through Class Aware Frequency Transformation
Authors
Vikash Kumar
Himanshu Patil
Rohit Lal
Anirban Chakraborty
Publication date
04.07.2023
Publisher
Springer US
Published in
International Journal of Computer Vision / Issue 11/2023
Print ISSN: 0920-5691
Electronic ISSN: 1573-1405
DOI
https://doi.org/10.1007/s11263-023-01810-0
