
MixStyle Neural Networks for Domain Generalization and Adaptation

Authors: Kaiyang Zhou, Yongxin Yang, Yu Qiao, Tao Xiang

Published in: International Journal of Computer Vision | Issue 3/2024

Publication date: 17-10-2023

Abstract

Neural networks do not generalize well to unseen data with domain shifts, a longstanding problem in machine learning and AI. To overcome this problem, we propose MixStyle, a simple, plug-and-play, parameter-free module that improves domain generalization performance without the need to collect more data or increase model capacity. The design of MixStyle is simple: it mixes the feature statistics of two random instances in a single forward pass during training. The idea is grounded in the finding from recent style-transfer research that feature statistics capture image style information, which essentially defines visual domains. Mixing feature statistics can therefore be seen as an efficient way to synthesize new domains in the feature space, thus achieving data augmentation. MixStyle is easy to implement with a few lines of code, does not require modifying the training objective, and fits a variety of learning paradigms, including supervised domain generalization, semi-supervised domain generalization, and unsupervised domain adaptation. Our experiments show that MixStyle significantly boosts out-of-distribution generalization performance across a wide range of tasks, including image recognition, instance retrieval, and reinforcement learning. The source code is released at https://github.com/KaiyangZhou/mixstyle-release.
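
To make the mechanism concrete, below is a minimal PyTorch-style sketch of the operation described above: per-instance feature means and standard deviations are mixed with those of a randomly shuffled mini-batch. This is an illustration written for this summary, not the authors' reference implementation (which is available at the repository linked above); the function name mixstyle, the Beta(alpha, alpha) sampling with alpha = 0.1, and the activation probability p are illustrative assumptions.

import torch

def mixstyle(x, alpha=0.1, p=0.5, eps=1e-6):
    # x: feature map of shape (B, C, H, W); apply during training only.
    if torch.rand(1).item() > p:  # activate the module stochastically
        return x
    B = x.size(0)
    mu = x.mean(dim=[2, 3], keepdim=True)                 # per-instance, per-channel mean
    sig = (x.var(dim=[2, 3], keepdim=True) + eps).sqrt()  # per-instance, per-channel std
    x_normed = (x - mu.detach()) / sig.detach()           # strip the instance's own "style"
    lam = torch.distributions.Beta(alpha, alpha).sample((B, 1, 1, 1)).to(x.device)
    perm = torch.randperm(B)                              # pair each instance with a random one
    mu_mix = lam * mu + (1 - lam) * mu[perm]              # interpolate the style statistics
    sig_mix = lam * sig + (1 - lam) * sig[perm]
    return x_normed * sig_mix + mu_mix                    # re-style with the mixed statistics

In a typical use, such a module would be inserted after one or more early convolutional blocks and disabled at test time, so the network is exposed to synthesized "new-domain" styles only during training.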


Footnotes
3. We follow the original train/test split in each dataset for evaluation.

6. We do not use batch normalization or dropout because they are detrimental to performance, as suggested by Igl et al. (2019).

8. Note that, for a fair comparison, we only select baselines that share a similar implementation, including the model architecture.
Literature
Balaji, Y., Sankaranarayanan, S., & Chellappa, R. (2018). Metareg: Towards domain generalization using meta-regularization. In: NeurIPS.
Ben-David, S., Blitzer, J., Crammer, K., Kulesza, A., Pereira, F., & Vaughan, J. W. (2010). A theory of learning from different domains. ML.
Blanchard, G., Lee, G., & Scott, C. (2011). Generalizing from several related classification tasks to a new unlabeled sample. In: NeurIPS.
Carlucci, F. M., D'Innocente, A., Bucci, S., Caputo, B., & Tommasi, T. (2019). Domain generalization by solving jigsaw puzzles. In: CVPR.
Cobbe, K., Klimov, O., Hesse, C., Kim, T., & Schulman, J. (2019). Quantifying generalization in reinforcement learning. In: ICML.
Cubuk, E. D., Zoph, B., Shlens, J., & Le, Q. V. (2019). Randaugment: Practical data augmentation with no separate search. arXiv preprint arXiv:1909.13719.
Deng, Z., Luo, Y., & Zhu, J. (2019). Cluster alignment with a teacher for unsupervised domain adaptation. In: ICCV.
DeVries, T., & Taylor, G. W. (2017). Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552.
Ding, Z., & Fu, Y. (2017). Deep domain generalization with structured low-rank constraint. TIP.
Dou, Q., Castro, D. C., Kamnitsas, K., & Glocker, B. (2019). Domain generalization via model-agnostic learning of semantic features. In: NeurIPS.
Dumoulin, V., Shlens, J., & Kudlur, M. (2017). A learned representation for artistic style. In: ICLR.
Espeholt, L., Soyer, H., Munos, R., Simonyan, K., Mnih, V., Ward, T., Doron, Y., Firoiu, V., Harley, T., Dunning, I., et al. (2018). Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures. In: ICML.
Fan, Q., Segu, M., Tai, Y. W., Yu, F., Tang, C. K., Schiele, B., & Dai, D. (2022). Normalization perturbation: A simple domain generalization method for real-world domain shifts. arXiv preprint arXiv:2211.04393.
Gamrian, S., & Goldberg, Y. (2019). Transfer learning for related reinforcement learning tasks via image-to-image translation. In: ICML.
Ganin, Y., & Lempitsky, V. S. (2015). Unsupervised domain adaptation by backpropagation. In: ICML.
Ghiasi, G., Lin, T. Y., & Le, Q. V. (2018). Dropblock: A regularization method for convolutional networks. In: NeurIPS.
Gong, R., Li, W., Chen, Y., & Van Gool, L. (2019). Dlow: Domain flow for adaptation and generalization. In: CVPR.
He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In: CVPR.
Hoffman, J., Tzeng, E., Park, T., Zhu, J. Y., Isola, P., Saenko, K., Efros, A., & Darrell, T. (2018). Cycada: Cycle-consistent adversarial domain adaptation. In: ICML.
Huang, X., & Belongie, S. (2017). Arbitrary style transfer in real-time with adaptive instance normalization. In: ICCV.
Huynh, S. V. (2021). A strong baseline for vehicle re-identification. In: CVPR-W.
Igl, M., Ciosek, K., Li, Y., Tschiatschek, S., Zhang, C., Devlin, S., & Hofmann, K. (2019). Generalization in reinforcement learning with selective noise injection and information bottleneck. In: NeurIPS.
Ioffe, S., & Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: ICML.
Justesen, N., Torrado, R. R., Bontrager, P., Khalifa, A., Togelius, J., & Risi, S. (2018). Illuminating generalization in deep reinforcement learning through procedural level generation. arXiv preprint arXiv:1806.10729.
Kang, G., Jiang, L., Wei, Y., Yang, Y., & Hauptmann, A. G. (2020). Contrastive adaptation network for single- and multi-source domain adaptation. TPAMI.
Kostrikov, I., Yarats, D., & Fergus, R. (2021). Image augmentation is all you need: Regularizing deep reinforcement learning from pixels. In: ICLR.
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In: NeurIPS.
Kushibar, K., & Jouide, S. Cancer radiomics extraction and selection pipeline.
Laskin, M., Lee, K., Stooke, A., Pinto, L., Abbeel, P., & Srinivas, A. (2020). Reinforcement learning with augmented data. In: NeurIPS.
Lee, C. Y., Batra, T., Baig, M. H., & Ulbricht, D. (2019). Sliced wasserstein discrepancy for unsupervised domain adaptation. In: CVPR.
Lee, K., Lee, K., Shin, J., & Lee, H. (2020). Network randomization: A simple technique for generalization in deep reinforcement learning. In: ICLR.
Li, D., Yang, Y., Song, Y. Z., & Hospedales, T. M. (2017). Deeper, broader and artier domain generalization. In: ICCV.
Li, D., Yang, Y., Song, Y. Z., & Hospedales, T. M. (2018). Learning to generalize: Meta-learning for domain generalization. In: AAAI.
Li, D., Zhang, J., Yang, Y., Liu, C., Song, Y. Z., & Hospedales, T. M. (2019). Episodic training for domain generalization. In: ICCV.
Li, H., Jialin Pan, S., Wang, S., & Kot, A. C. (2018). Domain generalization with adversarial feature learning. In: CVPR.
Li, Y., Tian, X., Gong, M., Liu, Y., Liu, T., Zhang, K., & Tao, D. (2018). Deep domain generalization via conditional invariant adversarial networks. In: ECCV.
Lin, T. Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., & Zitnick, C. L. (2014). Microsoft coco: Common objects in context. In: ECCV.
Liu, H., Simonyan, K., & Yang, Y. (2019). Darts: Differentiable architecture search. In: ICLR.
Liu, Z., Miao, Z., Pan, X., Zhan, X., Lin, D., Yu, S. X., & Gong, B. (2020). Open compound domain adaptation. In: CVPR.
Long, M., Cao, Y., Wang, J., & Jordan, M. I. (2015). Learning transferable features with deep adaptation networks. In: ICML.
Long, M., Zhu, H., Wang, J., & Jordan, M. I. (2016). Unsupervised domain adaptation with residual transfer networks. In: NeurIPS.
Loshchilov, I., & Hutter, F. (2017). Sgdr: Stochastic gradient descent with warm restarts. In: ICLR.
Lu, Z., Yang, Y., Zhu, X., Liu, C., Song, Y. Z., & Xiang, T. (2020). Stochastic classifiers for unsupervised domain adaptation. In: CVPR.
Maaten, L. v. d., & Hinton, G. (2008). Visualizing data using t-sne. JMLR.
Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., & Riedmiller, M. (2013). Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602.
Motiian, S., Piccirilli, M., Adjeroh, D. A., & Doretto, G. (2017). Unified deep supervised domain adaptation and generalization. In: ICCV.
Muandet, K., Balduzzi, D., & Scholkopf, B. (2013). Domain generalization via invariant feature representation. In: ICML.
Peng, X., Bai, Q., Xia, X., Huang, Z., Saenko, K., & Wang, B. (2019). Moment matching for multi-source domain adaptation. In: ICCV.
Peng, X., Usman, B., Kaushik, N., Hoffman, J., Wang, D., & Saenko, K. (2017). Visda: The visual domain adaptation challenge. arXiv preprint arXiv:1710.06924.
Ristani, E., Solera, F., Zou, R., Cucchiara, R., & Tomasi, C. (2016). Performance measures and a data set for multi-target, multi-camera tracking. In: ECCV.
Saito, K., Watanabe, K., Ushiku, Y., & Harada, T. (2018). Maximum classifier discrepancy for unsupervised domain adaptation. In: CVPR.
Schmid, F., Masoudian, S., Koutini, K., & Widmer, G. (2022). Cp-jku submission to dcase22: Distilling knowledge for low-complexity convolutional neural networks from a patchout audio transformer. Tech. rep., DCASE2022 Challenge.
Schulman, J., Wolski, F., Dhariwal, P., Radford, A., & Klimov, O. (2017). Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.
Segu, M., Tonioni, A., & Tombari, F. (2023). Batch normalization embeddings for deep domain generalization. Pattern Recognition, 135, 109115.
Seo, S., Suh, Y., Kim, D., Kim, G., Han, J., & Han, B. (2020). Learning to optimize domain specific normalization for domain generalization. In: ECCV.
Shankar, S., Piratla, V., Chakrabarti, S., Chaudhuri, S., Jyothi, P., & Sarawagi, S. (2018). Generalizing across domains via cross-gradient training. In: ICLR.
Sohn, K., Berthelot, D., Li, C. L., Zhang, Z., Carlini, N., Cubuk, E. D., Kurakin, A., Zhang, H., & Raffel, C. (2020). Fixmatch: Simplifying semi-supervised learning with consistency and confidence. In: NeurIPS.
Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout: A simple way to prevent neural networks from overfitting. JMLR.
Sun, B., & Saenko, K. (2016). Deep coral: Correlation alignment for deep domain adaptation. In: ECCV.
Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., & Rabinovich, A. (2015). Going deeper with convolutions. In: CVPR.
Tobin, J., Fong, R., Ray, A., Schneider, J., Zaremba, W., & Abbeel, P. (2017). Domain randomization for transferring deep neural networks from simulation to the real world. In: IROS.
Tzeng, E., Hoffman, J., Saenko, K., & Darrell, T. (2017). Adversarial discriminative domain adaptation. In: CVPR.
Ulyanov, D., Vedaldi, A., & Lempitsky, V. (2016). Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022.
Venkateswara, H., Eusebio, J., Chakraborty, S., & Panchanathan, S. (2017). Deep hashing network for unsupervised domain adaptation. In: CVPR.
Verma, V., Lamb, A., Beckham, C., Najafi, A., Mitliagkas, I., Lopez-Paz, D., & Bengio, Y. (2019). Manifold mixup: Better representations by interpolating hidden states. In: ICML.
Wang, H., Xu, M., Ni, B., & Zhang, W. (2020). Learning to combine: Knowledge aggregation for multi-source domain adaptation. In: ECCV.
Wang, S., Yu, L., Li, C., Fu, C. W., & Heng, P. A. (2020). Learning from extrinsic and intrinsic supervisions for domain generalization. In: ECCV.
Xu, R., Chen, Z., Zuo, W., Yan, J., & Lin, L. (2018). Deep cocktail network: Multi-source unsupervised domain adaptation with category shift. In: CVPR.
Yun, S., Han, D., Oh, S. J., Chun, S., Choe, J., & Yoo, Y. (2019). Cutmix: Regularization strategy to train strong classifiers with localizable features. In: ICCV.
Zhang, C., Vinyals, O., Munos, R., & Bengio, S. (2018). A study on overfitting in deep reinforcement learning. arXiv preprint arXiv:1804.06893.
Zhang, H., Cisse, M., Dauphin, Y. N., & Lopez-Paz, D. (2018). Mixup: Beyond empirical risk minimization. In: ICLR.
Zhao, H., Zhang, S., Wu, G., Moura, J. M., Costeira, J. P., & Gordon, G. J. (2018). Adversarial multiple source domain adaptation. In: NeurIPS.
Zhao, S., Wang, G., Zhang, S., Gu, Y., Li, Y., Song, Z., Xu, P., Hu, R., Chai, H., & Keutzer, K. (2020). Multi-source distilling domain adaptation. In: AAAI.
Zhao, Y., Zhong, Z., Luo, Z., Lee, G. H., & Sebe, N. (2021). Source-free open compound domain adaptation in semantic segmentation. arXiv preprint arXiv:2106.03422.
Zhao, Z., Wu, Z., Wu, X., Zhang, C., & Wang, S. (2022). Crossmodal few-shot 3d point cloud semantic segmentation. In: Proceedings of the 30th ACM international conference on multimedia (pp. 4760–4768).
Zheng, L., Shen, L., Tian, L., Wang, S., Wang, J., & Tian, Q. (2015). Scalable person re-identification: A benchmark. In: ICCV.
Zheng, Z., Zheng, L., & Yang, Y. (2017). Unlabeled samples generated by gan improve the person re-identification baseline in vitro. In: ICCV.
Zhong, Z., Zheng, L., Kang, G., Li, S., & Yang, Y. (2020). Random erasing data augmentation. In: AAAI.
Zhou, K., Loy, C. C., & Liu, Z. (2021). Semi-supervised domain generalization with stochastic stylematch. arXiv preprint arXiv:2106.00592.
Zhou, K., & Xiang, T. (2019). Torchreid: A library for deep learning person re-identification in pytorch. arXiv preprint arXiv:1910.10093.
Zhou, K., Yang, Y., Cavallaro, A., & Xiang, T. (2021). Learning generalisable omni-scale representations for person re-identification. TPAMI.
Zhou, K., Yang, Y., Hospedales, T., & Xiang, T. (2020). Learning to generate novel domains for domain generalization. In: ECCV.
Zhou, K., Yang, Y., Hospedales, T. M., & Xiang, T. (2020a). Deep domain-adversarial image generation for domain generalisation. In: AAAI.
Zhou, K., Yang, Y., Qiao, Y., & Xiang, T. (2021). Domain generalization with mixstyle. In: ICLR.
Metadata
Title
MixStyle Neural Networks for Domain Generalization and Adaptation
Authors
Kaiyang Zhou
Yongxin Yang
Yu Qiao
Tao Xiang
Publication date
17-10-2023
Publisher
Springer US
Published in
International Journal of Computer Vision / Issue 3/2024
Print ISSN: 0920-5691
Electronic ISSN: 1573-1405
DOI
https://doi.org/10.1007/s11263-023-01913-8
