Published in: International Journal of Computer Vision 3/2024

17.10.2023

MixStyle Neural Networks for Domain Generalization and Adaptation

Authors: Kaiyang Zhou, Yongxin Yang, Yu Qiao, Tao Xiang

Abstract

Neural networks do not generalize well to unseen data with domain shifts—a longstanding problem in machine learning and AI. To overcome this problem, we propose MixStyle, a simple plug-and-play, parameter-free module that can improve domain generalization performance without the need to collect more data or increase model capacity. The design of MixStyle is simple: it mixes the feature statistics of two random instances in a single forward pass during training. The idea is grounded in the finding from recent style transfer research that feature statistics capture image style information, which essentially defines visual domains. Therefore, mixing feature statistics can be seen as an efficient way to synthesize new domains in the feature space, thus achieving data augmentation. MixStyle is easy to implement with a few lines of code, does not require modification to training objectives, and can fit a variety of learning paradigms including supervised domain generalization, semi-supervised domain generalization, and unsupervised domain adaptation. Our experiments show that MixStyle can significantly boost out-of-distribution generalization performance across a wide range of tasks including image recognition, instance retrieval and reinforcement learning. The source code is released at https://github.com/KaiyangZhou/mixstyle-release.
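The abstract notes that MixStyle amounts to a few lines of code that mix feature statistics between random instances during training. For a concrete picture, the sketch below shows one way such a module could be written in PyTorch: it normalizes each feature map with its own channel-wise mean and standard deviation, then re-styles it with a convex combination of its statistics and those of another randomly paired instance. The hyperparameter values (application probability, Beta-distribution shape, epsilon) are illustrative assumptions; the repository linked above remains the reference implementation.

import random

import torch
import torch.nn as nn


class MixStyle(nn.Module):
    """Sketch of feature-statistic mixing for a 4D feature map (B, C, H, W).

    Hyperparameter values below (p=0.5, alpha=0.1, eps=1e-6) are illustrative
    assumptions, not necessarily the exact settings used in the paper.
    """

    def __init__(self, p=0.5, alpha=0.1, eps=1e-6):
        super().__init__()
        self.p = p          # probability of applying MixStyle to a batch
        self.eps = eps      # numerical stability for the std computation
        self.beta = torch.distributions.Beta(alpha, alpha)  # mixing weight prior

    def forward(self, x):
        # Only active during training, and only with probability p.
        if not self.training or random.random() > self.p:
            return x

        B = x.size(0)

        # Instance-wise channel statistics (the "style"): mean and std over H, W.
        mu = x.mean(dim=[2, 3], keepdim=True)
        var = x.var(dim=[2, 3], keepdim=True)
        sig = (var + self.eps).sqrt()
        mu, sig = mu.detach(), sig.detach()

        # Remove the instance's own style to obtain the normalized content.
        x_normed = (x - mu) / sig

        # Pair each instance with another random instance in the batch.
        perm = torch.randperm(B)
        mu2, sig2 = mu[perm], sig[perm]

        # Convex combination of the two sets of statistics.
        lam = self.beta.sample((B, 1, 1, 1)).to(x.device)
        mu_mix = lam * mu + (1 - lam) * mu2
        sig_mix = lam * sig + (1 - lam) * sig2

        # Re-style the content with the mixed statistics.
        return x_normed * sig_mix + mu_mix

In this sketch the module would typically be inserted after early convolutional blocks of a backbone during training; in evaluation mode it reduces to an identity mapping, so inference is unchanged.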

Footnotes
3
We follow the original train/test split in each dataset for evaluation.
 
6
We do not use batch normalization or dropout because they are detrimental to performance, as suggested by Igl et al. (2019).
 
8
Note that, for a fair comparison, we only select baselines that share a similar implementation, including the model architecture.
 
References
Balaji, Y., Sankaranarayanan, S., & Chellappa, R. (2018). Metareg: Towards domain generalization using meta-regularization. In: NeurIPS.
Ben-David, S., Blitzer, J., Crammer, K., Kulesza, A., Pereira, F., & Vaughan, J. W. (2010). A theory of learning from different domains. ML.
Blanchard, G., Lee, G., & Scott, C. (2011). Generalizing from several related classification tasks to a new unlabeled sample. In: NeurIPS.
Carlucci, F. M., D’Innocente, A., Bucci, S., Caputo, B., & Tommasi, T. (2019). Domain generalization by solving jigsaw puzzles. In: CVPR.
Cobbe, K., Klimov, O., Hesse, C., Kim, T., & Schulman, J. (2019). Quantifying generalization in reinforcement learning. In: ICML.
Cubuk, E. D., Zoph, B., Shlens, J., & Le, Q. V. (2019). Randaugment: Practical data augmentation with no separate search. arXiv preprint arXiv:1909.13719.
Deng, Z., Luo, Y., & Zhu, J. (2019). Cluster alignment with a teacher for unsupervised domain adaptation. In: ICCV.
DeVries, T., & Taylor, G. W. (2017). Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552.
Ding, Z., & Fu, Y. (2017). Deep domain generalization with structured low-rank constraint. TIP.
Dou, Q., Castro, D. C., Kamnitsas, K., & Glocker, B. (2019). Domain generalization via model-agnostic learning of semantic features. In: NeurIPS.
Dumoulin, V., Shlens, J., & Kudlur, M. (2017). A learned representation for artistic style. In: ICLR.
Espeholt, L., Soyer, H., Munos, R., Simonyan, K., Mnih, V., Ward, T., Doron, Y., Firoiu, V., Harley, T., Dunning, I., et al. (2018). Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures. In: ICML.
Fan, Q., Segu, M., Tai, Y. W., Yu, F., Tang, C. K., Schiele, B., & Dai, D. (2022). Normalization perturbation: A simple domain generalization method for real-world domain shifts. arXiv preprint arXiv:2211.04393.
Farebrother, J., Machado, M. C., & Bowling, M. (2018). Generalization and regularization in dqn. arXiv preprint arXiv:1810.00123.
Gamrian, S., & Goldberg, Y. (2019). Transfer learning for related reinforcement learning tasks via image-to-image translation. In: ICML.
Ganin, Y., & Lempitsky, V. S. (2015). Unsupervised domain adaptation by backpropagation. In: ICML.
Ghiasi, G., Lin, T. Y., & Le, Q. V. (2018). Dropblock: A regularization method for convolutional networks. In: NeurIPS.
Gong, R., Li, W., Chen, Y., & Van Gool, L. (2019). Dlow: Domain flow for adaptation and generalization. In: CVPR.
He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In: CVPR.
Hoffman, J., Tzeng, E., Park, T., Zhu, J. Y., Isola, P., Saenko, K., Efros, A., & Darrell, T. (2018). Cycada: Cycle-consistent adversarial domain adaptation. In: ICML.
Huang, X., & Belongie, S. (2017). Arbitrary style transfer in real-time with adaptive instance normalization. In: ICCV.
Huynh, S. V. (2021). A strong baseline for vehicle re-identification. In: CVPR-W.
Igl, M., Ciosek, K., Li, Y., Tschiatschek, S., Zhang, C., Devlin, S., & Hofmann, K. (2019). Generalization in reinforcement learning with selective noise injection and information bottleneck. In: NeurIPS.
Ioffe, S., & Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: ICML.
Justesen, N., Torrado, R. R., Bontrager, P., Khalifa, A., Togelius, J., & Risi, S. (2018). Illuminating generalization in deep reinforcement learning through procedural level generation. arXiv preprint arXiv:1806.10729.
Kang, G., Jiang, L., Wei, Y., Yang, Y., & Hauptmann, A. G. (2020). Contrastive adaptation network for single- and multi-source domain adaptation. TPAMI.
Kostrikov, I., Yarats, D., & Fergus, R. (2021). Image augmentation is all you need: Regularizing deep reinforcement learning from pixels. In: ICLR.
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In: NeurIPS.
Kushibar, K., & Jouide, S. Cancer radiomics extraction and selection pipeline.
Laskin, M., Lee, K., Stooke, A., Pinto, L., Abbeel, P., & Srinivas, A. (2020). Reinforcement learning with augmented data. In: NeurIPS.
Lee, C. Y., Batra, T., Baig, M. H., & Ulbricht, D. (2019). Sliced wasserstein discrepancy for unsupervised domain adaptation. In: CVPR.
Lee, K., Lee, K., Shin, J., & Lee, H. (2020). Network randomization: A simple technique for generalization in deep reinforcement learning. In: ICLR.
Li, D., Yang, Y., Song, Y. Z., & Hospedales, T. M. (2017). Deeper, broader and artier domain generalization. In: ICCV.
Li, D., Yang, Y., Song, Y. Z., & Hospedales, T. M. (2018). Learning to generalize: Meta-learning for domain generalization. In: AAAI.
Li, D., Zhang, J., Yang, Y., Liu, C., Song, Y. Z., & Hospedales, T. M. (2019). Episodic training for domain generalization. In: ICCV.
Li, H., Jialin Pan, S., Wang, S., & Kot, A. C. (2018). Domain generalization with adversarial feature learning. In: CVPR.
Li, Y., Tian, X., Gong, M., Liu, Y., Liu, T., Zhang, K., & Tao, D. (2018). Deep domain generalization via conditional invariant adversarial networks. In: ECCV.
Lin, T. Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., & Zitnick, C. L. (2014). Microsoft coco: Common objects in context. In: ECCV.
Liu, H., Simonyan, K., & Yang, Y. (2019). Darts: Differentiable architecture search. In: ICLR.
Liu, Z., Miao, Z., Pan, X., Zhan, X., Lin, D., Yu, S. X., & Gong, B. (2020). Open compound domain adaptation. In: CVPR.
Long, M., Cao, Y., Wang, J., & Jordan, M. I. (2015). Learning transferable features with deep adaptation networks. In: ICML.
Long, M., Zhu, H., Wang, J., & Jordan, M. I. (2016). Unsupervised domain adaptation with residual transfer networks. In: NeurIPS.
Loshchilov, I., & Hutter, F. (2017). Sgdr: Stochastic gradient descent with warm restarts. In: ICLR.
Lu, Z., Yang, Y., Zhu, X., Liu, C., Song, Y. Z., & Xiang, T. (2020). Stochastic classifiers for unsupervised domain adaptation. In: CVPR.
Maaten, L. v. d., & Hinton, G. (2008). Visualizing data using t-sne. JMLR.
Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., & Riedmiller, M. (2013). Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602.
Motiian, S., Piccirilli, M., Adjeroh, D. A., & Doretto, G. (2017). Unified deep supervised domain adaptation and generalization. In: ICCV.
Muandet, K., Balduzzi, D., & Scholkopf, B. (2013). Domain generalization via invariant feature representation. In: ICML.
Peng, X., Bai, Q., Xia, X., Huang, Z., Saenko, K., & Wang, B. (2019). Moment matching for multi-source domain adaptation. In: ICCV.
Peng, X., Usman, B., Kaushik, N., Hoffman, J., Wang, D., & Saenko, K. (2017). Visda: The visual domain adaptation challenge. arXiv preprint arXiv:1710.06924.
Ristani, E., Solera, F., Zou, R., Cucchiara, R., & Tomasi, C. (2016). Performance measures and a data set for multi-target, multi-camera tracking. In: ECCV.
Saito, K., Watanabe, K., Ushiku, Y., & Harada, T. (2018). Maximum classifier discrepancy for unsupervised domain adaptation. In: CVPR.
Schmid, F., Masoudian, S., Koutini, K., & Widmer, G. (2022). Cp-jku submission to dcase22: Distilling knowledge for low-complexity convolutional neural networks from a patchout audio transformer. Tech. rep., DCASE2022 Challenge.
Schulman, J., Wolski, F., Dhariwal, P., Radford, A., & Klimov, O. (2017). Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.
Segu, M., Tonioni, A., & Tombari, F. (2023). Batch normalization embeddings for deep domain generalization. Pattern Recognition, 135, 109115.
Seo, S., Suh, Y., Kim, D., Kim, G., Han, J., & Han, B. (2020). Learning to optimize domain specific normalization for domain generalization. In: ECCV.
Shankar, S., Piratla, V., Chakrabarti, S., Chaudhuri, S., Jyothi, P., & Sarawagi, S. (2018). Generalizing across domains via cross-gradient training. In: ICLR.
Sohn, K., Berthelot, D., Li, C. L., Zhang, Z., Carlini, N., Cubuk, E. D., Kurakin, A., Zhang, H., & Raffel, C. (2020). Fixmatch: Simplifying semi-supervised learning with consistency and confidence. In: NeurIPS.
Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout: A simple way to prevent neural networks from overfitting. JMLR.
Sun, B., & Saenko, K. (2016). Deep coral: Correlation alignment for deep domain adaptation. In: ECCV.
Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., & Rabinovich, A. (2015). Going deeper with convolutions. In: CVPR.
Tobin, J., Fong, R., Ray, A., Schneider, J., Zaremba, W., & Abbeel, P. (2017). Domain randomization for transferring deep neural networks from simulation to the real world. In: IROS.
Tzeng, E., Hoffman, J., Saenko, K., & Darrell, T. (2017). Adversarial discriminative domain adaptation. In: CVPR.
Ulyanov, D., Vedaldi, A., & Lempitsky, V. (2016). Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022.
Venkateswara, H., Eusebio, J., Chakraborty, S., & Panchanathan, S. (2017). Deep hashing network for unsupervised domain adaptation. In: CVPR.
Verma, V., Lamb, A., Beckham, C., Najafi, A., Mitliagkas, I., Lopez-Paz, D., & Bengio, Y. (2019). Manifold mixup: Better representations by interpolating hidden states. In: ICML.
Wang, H., Xu, M., Ni, B., & Zhang, W. (2020). Learning to combine: Knowledge aggregation for multi-source domain adaptation. In: ECCV.
Wang, S., Yu, L., Li, C., Fu, C. W., & Heng, P. A. (2020). Learning from extrinsic and intrinsic supervisions for domain generalization. In: ECCV.
Xu, R., Chen, Z., Zuo, W., Yan, J., & Lin, L. (2018). Deep cocktail network: Multi-source unsupervised domain adaptation with category shift. In: CVPR.
Yun, S., Han, D., Oh, S. J., Chun, S., Choe, J., & Yoo, Y. (2019). Cutmix: Regularization strategy to train strong classifiers with localizable features. In: ICCV.
Zhang, C., Vinyals, O., Munos, R., & Bengio, S. (2018). A study on overfitting in deep reinforcement learning. arXiv preprint arXiv:1804.06893.
Zhang, H., Cisse, M., Dauphin, Y. N., & Lopez-Paz, D. (2018). Mixup: Beyond empirical risk minimization. In: ICLR.
Zhao, H., Zhang, S., Wu, G., Moura, J. M., Costeira, J. P., & Gordon, G. J. (2018). Adversarial multiple source domain adaptation. In: NeurIPS.
Zhao, S., Wang, G., Zhang, S., Gu, Y., Li, Y., Song, Z., Xu, P., Hu, R., Chai, H., & Keutzer, K. (2020). Multi-source distilling domain adaptation. In: AAAI.
Zhao, Y., Zhong, Z., Luo, Z., Lee, G. H., & Sebe, N. (2021). Source-free open compound domain adaptation in semantic segmentation. arXiv preprint arXiv:2106.03422.
Zhao, Z., Wu, Z., Wu, X., Zhang, C., & Wang, S. (2022). Crossmodal few-shot 3d point cloud semantic segmentation. In: Proceedings of the 30th ACM international conference on multimedia (pp. 4760–4768).
Zheng, L., Shen, L., Tian, L., Wang, S., Wang, J., & Tian, Q. (2015). Scalable person re-identification: A benchmark. In: ICCV.
Zheng, Z., Zheng, L., & Yang, Y. (2017). Unlabeled samples generated by gan improve the person re-identification baseline in vitro. In: ICCV.
Zhong, Z., Zheng, L., Kang, G., Li, S., & Yang, Y. (2020). Random erasing data augmentation. In: AAAI.
Zhou, K., Loy, C. C., & Liu, Z. (2021). Semi-supervised domain generalization with stochastic stylematch. arXiv preprint arXiv:2106.00592.
Zhou, K., & Xiang, T. (2019). Torchreid: A library for deep learning person re-identification in pytorch. arXiv preprint arXiv:1910.10093.
Zhou, K., Yang, Y., Cavallaro, A., & Xiang, T. (2021). Learning generalisable omni-scale representations for person re-identification. TPAMI.
Zhou, K., Yang, Y., Hospedales, T., & Xiang, T. (2020). Learning to generate novel domains for domain generalization. In: ECCV.
Zhou, K., Yang, Y., Hospedales, T. M., & Xiang, T. (2020a). Deep domain-adversarial image generation for domain generalisation. In: AAAI.
Zhou, K., Yang, Y., Qiao, Y., & Xiang, T. (2021). Domain generalization with mixstyle. In: ICLR.
Metadata
Title
MixStyle Neural Networks for Domain Generalization and Adaptation
Authors
Kaiyang Zhou
Yongxin Yang
Yu Qiao
Tao Xiang
Publication date
17.10.2023
Publisher
Springer US
Published in
International Journal of Computer Vision / Issue 3/2024
Print ISSN: 0920-5691
Electronic ISSN: 1573-1405
DOI
https://doi.org/10.1007/s11263-023-01913-8
