Published in: International Journal of Computer Vision, Issue 8/2021

20.05.2021

Deep CockTail Networks

A Universal Framework for Visual Multi-source Domain Adaptation

Authors: Ziliang Chen, Pengxu Wei, Jingyu Zhuang, Guanbin Li, Liang Lin


Abstract

Transferable deep representations for visual domain adaptation (DA) provide a route to learning from labeled source images to recognize target images without target-domain supervision. This line of research has attracted increasing interest owing to its industrial promise: it spares laborious annotation and generalizes remarkably well. However, DA presumes that source images are identically sampled from a single source, whereas Multi-Source DA (MSDA) is ubiquitous in the real world. In MSDA, domain shifts exist not only between the source and target domains but also among the sources; in particular, the multiple source domains and the target domain may disagree on their semantics (e.g., category shifts). These issues challenge existing MSDA solutions. In this paper, we propose Deep CockTail Network (DCTN), a universal and flexibly deployed framework that addresses these problems. DCTN uses a multi-way adversarial learning pipeline to minimize the domain discrepancy between the target and each of the multiple sources, in order to learn domain-invariant features. The derived source-specific perplexity scores measure how closely each target feature resembles a feature from each source domain. The multi-source category classifiers are integrated with the perplexity scores to categorize target images. We accordingly derive a theoretical analysis of DCTN, including an interpretation of why DCTN can succeed without precisely crafting source-specific hyper-parameters, and upper bounds on the target expected loss in terms of domain and category shifts. In our experiments, DCTN is evaluated on four benchmarks, covering the vanilla setting and three challenging category-shift transfer problems in MSDA, i.e., the source-shift, target-shift and source-target-shift scenarios. The results show that DCTN significantly boosts classification accuracy in MSDA and is remarkably resistant to negative transfer across the different MSDA scenarios.
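As a concrete illustration of the inference rule outlined in the abstract — source-specific classifier outputs weighted by perplexity scores — here is a minimal NumPy sketch. The softmax weighting over discriminator logits and the names `perplexity_scores` and `dctn_predict` are illustrative assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def perplexity_scores(discriminator_logits):
    """Map per-source domain-discriminator logits for one target feature
    to a simplex weight: how much the feature resembles each source.
    (A softmax is an illustrative choice, not the paper's exact form.)"""
    z = discriminator_logits - discriminator_logits.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def dctn_predict(class_probs_per_source, discriminator_logits):
    """Weight each source-specific classifier's class posterior by its
    perplexity score and sum, yielding the target class prediction."""
    w = perplexity_scores(np.asarray(discriminator_logits))  # shape (M,)
    probs = np.asarray(class_probs_per_source)               # shape (M, K)
    return w @ probs                                         # shape (K,)

# Two sources (M = 2), three classes (K = 3): the target feature resembles
# source 0 more, so source 0's classifier dominates the combined posterior.
combined = dctn_predict(
    [[0.7, 0.2, 0.1],   # posterior from source-0 classifier
     [0.1, 0.2, 0.7]],  # posterior from source-1 classifier
    [2.0, 0.0],         # domain-discriminator logits for this target feature
)
```

Because the perplexity scores lie on the simplex and each classifier outputs a distribution, the combined output is itself a valid class distribution; before the discriminator is trained, uniform weights recover a plain average of the source classifiers (cf. footnote 2).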


Appendices
Accessible only with authorization
Footnotes
1
More precisely, Saito et al. (2018) and Busto and Gall (2017) consider two different open-set problems.
 
2
Since the domain discriminator has not been trained yet, we take the uniform-distribution simplex weights as the perplexity scores.
 
3
Since each sample x corresponds to a unique class y, \(\{{\mathscr {P}}_{j}\}^M_{j=1}\) and \({\mathscr {P}}_t\) can be viewed as equivalent embeddings of the \(\{P_{j}(x,y)\}^M_{j=1}\) and \(P_{t}(x,y)\) that we have discussed.
 
References
Baktashmotlagh, M., Harandi, M., & Salzmann, M. (2016). Distribution-matching embedding for visual domain adaptation. The Journal of Machine Learning Research, 17(1), 3760–3789.
Ben-David, S., Blitzer, J., Crammer, K., Kulesza, A., Pereira, F., & Vaughan, J. W. (2010). A theory of learning from different domains. Machine Learning, 79(1), 151–175.
Blitzer, J., Crammer, K., Kulesza, A., Pereira, F., & Wortman, J. (2008). Learning bounds for domain adaptation. In Advances in neural information processing systems (pp. 129–136).
Bousmalis, K., Silberman, N., Dohan, D., Erhan, D., & Krishnan, D. (2017). Unsupervised pixel-level domain adaptation with generative adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 95–104).
Busto, P. P., & Gall, J. (2017). Open set domain adaptation. In Proceedings of the IEEE international conference on computer vision (pp. 754–763).
Cao, Z., Long, M., Wang, J., & Jordan, M. I. (2018). Partial transfer learning with selective adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2724–2732).
Cao, Z., Ma, L., Long, M., & Wang, J. (2018). Partial adversarial domain adaptation. In Proceedings of the European conference on computer vision (pp. 139–155).
Cordts, M., Omran, M., Ramos, S., Scharwächter, T., Enzweiler, M., Benenson, R., Franke, U., Roth, S., & Schiele, B. (2015). The Cityscapes dataset. In CVPR workshop on the future of datasets in vision.
Deng, J., Dong, W., Socher, R., Li, L. J., Li, K., & Fei-Fei, L. (2009). ImageNet: A large-scale hierarchical image database. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 248–255).
Duan, L., Xu, D., & Tsang, I. W. H. (2012). Domain adaptation from multiple sources: A domain-dependent regularization approach. IEEE Transactions on Neural Networks and Learning Systems, 23(3), 504–518.
Fernando, B., Habrard, A., Sebban, M., & Tuytelaars, T. (2013). Unsupervised visual domain adaptation using subspace alignment. In Proceedings of the IEEE international conference on computer vision (pp. 2960–2967).
Ganin, Y., & Lempitsky, V. (2015). Unsupervised domain adaptation by backpropagation. In International conference on machine learning (pp. 1180–1189).
Ganin, Y., Ustinova, E., Ajakan, H., Germain, P., Larochelle, H., Laviolette, F., Marchand, M., & Lempitsky, V. (2017). Domain-adversarial training of neural networks. In Domain adaptation in computer vision applications (p. 189).
Gebru, T., Hoffman, J., & Fei-Fei, L. (2017). Fine-grained recognition in the wild: A multi-task domain adaptation approach. In Proceedings of the IEEE international conference on computer vision (pp. 1358–1367).
Ghifary, M., Kleijn, W. B., Zhang, M., Balduzzi, D., & Li, W. (2016). Deep reconstruction-classification networks for unsupervised domain adaptation. In Proceedings of the European conference on computer vision (pp. 597–613).
Gong, B., Grauman, K., & Sha, F. (2014). Learning kernels for unsupervised domain adaptation with applications to visual object recognition. International Journal of Computer Vision, 109(1–2), 3–27.
Gong, B., Shi, Y., Sha, F., & Grauman, K. (2012). Geodesic flow kernel for unsupervised domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2066–2073).
Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2014). Generative adversarial nets. In Advances in neural information processing systems (pp. 2672–2680).
Gopalan, R., Li, R., & Chellappa, R. (2011). Domain adaptation for object recognition: An unsupervised approach. In Proceedings of the IEEE international conference on computer vision (pp. 999–1006).
Gretton, A., Borgwardt, K. M., Rasch, M., Schölkopf, B., & Smola, A. J. (2007). A kernel method for the two-sample-problem. In Advances in neural information processing systems (pp. 513–520).
Gretton, A., Smola, A. J., Huang, J., Schmittfull, M., Borgwardt, K. M., & Schölkopf, B. (2009). Covariate shift by kernel mean matching. Dataset Shift in Machine Learning, 3(4), 5.
Haeusser, P., Frerix, T., Mordvintsev, A., & Cremers, D. (2017). Associative domain adaptation. In Proceedings of the IEEE international conference on computer vision (pp. 2784–2792).
Ho, H. T., & Gopalan, R. (2014). Model-driven domain adaptation on product manifolds for unconstrained face recognition. International Journal of Computer Vision, 109(1–2), 110–125.
Hoffman, J., Wang, D., Yu, F., & Darrell, T. (2016). FCNs in the wild: Pixel-level adversarial and constraint-based adaptation. arXiv preprint arXiv:1612.02649.
Jhuo, I. H., Liu, D., Lee, D. T., & Chang, S. F. (2013). Robust visual domain adaptation with low-rank reconstruction. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2168–2175).
Johnson, J., Hariharan, B., van der Maaten, L., Fei-Fei, L., Zitnick, C. L., & Girshick, R. B. (2017). CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1988–1997).
Kan, M., Wu, J., Shan, S., & Chen, X. (2014). Domain adaptation for face recognition: Targetize source domain bridged by common subspace. International Journal of Computer Vision, 109(1–2), 94–109.
Kim, Y., Cho, D., & Hong, S. (2020). Towards privacy-preserving domain adaptation. IEEE Signal Processing Letters, 27, 1675–1679.
Kingma, D. P., & Ba, J. (2015). Adam: A method for stochastic optimization. In International conference on learning representations.
Koniusz, P., Tas, Y., & Porikli, F. (2017). Domain adaptation by mixture of alignments of second- or higher-order scatter tensors. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 7139–7148).
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in neural information processing systems (pp. 1097–1105).
LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278–2324.
Liang, X., Xu, C., Shen, X., Yang, J., Tang, J., Lin, L., et al. (2016). Human parsing with contextualized convolutional neural network. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(1), 115–127.
Long, J., Shelhamer, E., & Darrell, T. (2015). Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3431–3440).
Long, M., Cao, Y., Wang, J., & Jordan, M. (2015). Learning transferable features with deep adaptation networks. In International conference on machine learning (pp. 97–105).
Long, M., Zhu, H., Wang, J., & Jordan, M. I. (2016). Unsupervised domain adaptation with residual transfer networks. In Advances in neural information processing systems (pp. 136–144).
Long, M., Zhu, H., Wang, J., & Jordan, M. I. (2017). Deep transfer learning with joint adaptation networks. In Proceedings of the international conference on machine learning (pp. 2208–2217).
Lu, H., Zhang, L., Cao, Z., Wei, W., Xian, K., Shen, C., & van den Hengel, A. (2017). When unsupervised domain adaptation meets tensor representations. In Proceedings of the IEEE international conference on computer vision (pp. 599–608).
Maaten, L. V. D., & Hinton, G. (2008). Visualizing data using t-SNE. Journal of Machine Learning Research, 9, 2579–2605.
Mancini, M., Porzi, L., Bulò, S. R., Caputo, B., & Ricci, E. (2018). Boosting domain adaptation by discovering latent domains. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3771–3780).
Mansour, Y., Mohri, M., & Rostamizadeh, A. (2009). Domain adaptation with multiple sources. In Advances in neural information processing systems (pp. 1041–1048).
Mao, X., Li, Q., Xie, H., Lau, R. Y., Wang, Z., & Smolley, S. P. (2017). Least squares generative adversarial networks. In Proceedings of the IEEE international conference on computer vision (pp. 2794–2802).
Motiian, S., Jones, Q., Iranmanesh, S. M., & Doretto, G. (2017). Few-shot adversarial domain adaptation. In Advances in neural information processing systems (pp. 6670–6680).
Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., & Ng, A. Y. (2011). Reading digits in natural images with unsupervised feature learning. In NIPS workshop on deep learning and unsupervised feature learning.
Pan, S. J., Tsang, I. W., Kwok, J. T., & Yang, Q. (2011). Domain adaptation via transfer component analysis. IEEE Transactions on Neural Networks, 22(2), 199–210.
Pan, S. J., & Yang, Q. (2010). A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10), 1345–1359.
Peng, X., Bai, Q., Xia, X., Huang, Z., Saenko, K., & Wang, B. (2019). Moment matching for multi-source domain adaptation. In Proceedings of the IEEE international conference on computer vision (pp. 1406–1415).
Ren, S., He, K., Girshick, R., & Sun, J. (2015). Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems (pp. 91–99).
Saenko, K., Kulis, B., Fritz, M., & Darrell, T. (2010). Adapting visual category models to new domains. In Proceedings of the European conference on computer vision (pp. 213–226).
Saito, K., Ushiku, Y., & Harada, T. (2017). Asymmetric tri-training for unsupervised domain adaptation. In Proceedings of the international conference on machine learning (pp. 2988–2997).
Saito, K., Yamamoto, S., Ushiku, Y., & Harada, T. (2018). Open set domain adaptation by backpropagation. In Proceedings of the European conference on computer vision (pp. 156–171).
Shao, M., Kit, D., & Fu, Y. (2014). Generalized transfer subspace learning through low-rank constraint. International Journal of Computer Vision, 109(1–2), 74–93.
Sun, B., Feng, J., & Saenko, K. (2016). Return of frustratingly easy domain adaptation. In AAAI conference on artificial intelligence (pp. 2058–2065).
Tzeng, E., Hoffman, J., Darrell, T., & Saenko, K. (2015). Simultaneous deep transfer across domains and tasks. In Proceedings of the IEEE international conference on computer vision (pp. 4068–4076).
Tzeng, E., Hoffman, J., Saenko, K., & Darrell, T. (2017). Adversarial discriminative domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2962–2971).
Xie, J., Hu, W., Zhu, S. C., & Wu, Y. N. (2015). Learning sparse frame models for natural image patterns. International Journal of Computer Vision, 114(2–3), 91–112.
Xu, J., Ramos, S., Vázquez, D., & López, A. M. (2016). Hierarchical adaptive structural SVM for domain adaptation. International Journal of Computer Vision, 119(2), 159–178.
Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhudinov, R., Zemel, R., & Bengio, Y. (2015). Show, attend and tell: Neural image caption generation with visual attention. In International conference on machine learning (pp. 2048–2057).
Xu, R., Chen, Z., Zuo, W., Yan, J., & Lin, L. (2018). Deep cocktail network: Multi-source unsupervised domain adaptation with category shift. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3964–3973).
Xu, R., Li, G., Yang, J., & Lin, L. (2019). Larger norm more transferable: An adaptive feature norm approach for unsupervised domain adaptation. In Proceedings of the IEEE international conference on computer vision (pp. 1426–1435).
Yan, H., Ding, Y., Li, P., Wang, Q., Xu, Y., & Zuo, W. (2017). Mind the class weight bias: Weighted maximum mean discrepancy for unsupervised domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 945–954).
Yang, J., Yan, R., & Hauptmann, A. G. (2007). Cross-domain video concept detection using adaptive SVMs. In Proceedings of the ACM international conference on multimedia (pp. 188–197).
Yao, Y., Zhang, Y., Li, X., & Ye, Y. (2019). Heterogeneous domain adaptation via soft transfer network. In Proceedings of the 27th ACM international conference on multimedia (pp. 1578–1586).
You, K., Long, M., Cao, Z., Wang, J., & Jordan, M. I. (2019). Universal domain adaptation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 2720–2729).
Zellinger, W., Grubinger, T., Lughofer, E., Natschläger, T., & Samingerplatz, S. (2017). Central moment discrepancy (CMD) for domain-invariant representation learning. In International conference on learning representations.
Zhang, J., Li, W., & Ogunbona, P. (2017). Joint geometrical and statistical alignment for visual domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 5150–5158).
Zhang, S., Huang, J. B., Lim, J., Gong, Y., Wang, J., Ahuja, N., & Yang, M. H. (2019). Tracking persons-of-interest via unsupervised representation adaptation. International Journal of Computer Vision, 1–25.
Zhao, H., Zhang, S., Wu, G., Costeira, J. P., Moura, J. M. F., & Gordon, G. J. (2018). Multiple source domain adaptation with adversarial learning. In International conference on learning representations.
Metadata
Title
Deep CockTail Networks
A Universal Framework for Visual Multi-source Domain Adaptation
Authors
Ziliang Chen
Pengxu Wei
Jingyu Zhuang
Guanbin Li
Liang Lin
Publication date
20.05.2021
Publisher
Springer US
Published in
International Journal of Computer Vision / Issue 8/2021
Print ISSN: 0920-5691
Electronic ISSN: 1573-1405
DOI
https://doi.org/10.1007/s11263-021-01463-x
