Published in: International Journal of Computer Vision, Issue 1-2/2014

01.08.2014

Domain Adaptation for Face Recognition: Targetize Source Domain Bridged by Common Subspace

Authors: Meina Kan, Junting Wu, Shiguang Shan, Xilin Chen


Abstract

In many applications, a face recognition model learned on a source domain degrades, often significantly, when applied to a novel target domain because of the mismatch between the two domains. Aiming at learning a better face recognition model for the target domain, this paper proposes a simple but effective domain adaptation approach that transfers the supervision knowledge from a labeled source domain to the unlabeled target domain. Our basic idea is to convert the source domain images into the target domain (termed targetizing the source domain hereinafter) while keeping their supervision information. For this purpose, each source domain image is simply represented as a linear combination of a sparse set of target domain neighbors in the image space, with the combination coefficients, however, learned in a common subspace. The principle behind this strategy is that the common knowledge is favorable only for accurate cross-domain reconstruction; for classification in the target domain, the knowledge specific to the target domain is also essential and thus should be largely preserved (through targetization in the image space in this work). To discover the common knowledge, a common subspace is learned in which the structures of both domains are preserved while the disparity between the source and target domains is reduced. The proposed method is extensively evaluated under three face recognition scenarios, i.e., domain adaptation across view angle, domain adaptation across ethnicity, and domain adaptation across imaging condition. The experimental results illustrate the superiority of our method over competing approaches.
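
To make the targetization idea above concrete, here is a minimal sketch, not the authors' algorithm: it assumes PCA over the pooled source and target data as a stand-in for the learned common subspace, and an \(\ell_1\)-regularized least squares (scikit-learn's Lasso) for the sparse reconstruction coefficients; the function name and parameters are illustrative only.

```python
# Sketch of "targetization": re-express each labeled source image as a sparse
# combination of unlabeled target images, with the weights learned in a common
# subspace but the reconstruction carried out in the original image space.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Lasso

def targetize(X_src, X_tgt, n_components=50, alpha=0.1):
    """X_src: (n_src, d) source images (rows are vectorized faces).
       X_tgt: (n_tgt, d) target images.
       Returns (n_src, d) targetized source images, which keep the source labels."""
    # 1) Stand-in for the common subspace: PCA fit jointly on both domains.
    pca = PCA(n_components=n_components)
    pca.fit(np.vstack([X_src, X_tgt]))
    Z_src, Z_tgt = pca.transform(X_src), pca.transform(X_tgt)

    targetized = np.empty_like(X_src, dtype=float)
    for i, z in enumerate(Z_src):
        # 2) Sparse coefficients learned in the common subspace:
        #    z ≈ sum_j w_j * Z_tgt[j], with few nonzero w_j.
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(Z_tgt.T, z)      # design matrix: (n_components, n_tgt)
        w = lasso.coef_            # one weight per target image
        # 3) Apply the same weights to the raw target images, so the
        #    reconstruction lives in the target domain's image space.
        targetized[i] = w @ X_tgt
    return targetized
```

A target-domain classifier could then be trained on the targetized source images together with their original source labels, which is the role the targetized data play in the approach described above.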


Footnotes
1
In probability theory, the support of a probability distribution can be loosely thought of as the closure of the set of possible values of a random variable having that distribution. Here it can be regarded as the closure of the set of all possible instances.
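For reference, one standard formalization (added here for clarity, not given by the authors) is
\[
\operatorname{supp}(P) \;=\; \{\, x \in \mathcal{X} : P(U) > 0 \ \text{for every open set } U \ni x \,\},
\]
i.e., loosely, the smallest closed set that carries all of the probability mass.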
 
2
In this experiment the performance of ITL is even worse than that of PCA; however, this does not imply the inferiority of ITL, since the data distribution in this setting does not agree with its assumption: ITL assumes that the data in both the source and target domains form tight clusters, and that clusters from the two domains are aligned when they correspond to the same class. In the setting here, the source and target domains have only a few samples per class, which can hardly form tight clusters, and, even worse, the samples from the source and target domains come from entirely different classes.
 
References
Belhumeur, P. N., Hespanha, J. P., & Kriegman, D. J. (1997). Eigenfaces vs. fisherfaces: Recognition using class specific linear projection. IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), 19(7), 711–720.
Ben-David, S., Blitzer, J., Crammer, K., & Pereira, F. (2007). Analysis of representations for domain adaptation. Advances in Neural Information Processing Systems (NIPS), 19, 137–144.
Bickel, S., Brückner, M., & Scheffer, T. (2009). Discriminative learning under covariate shift. The Journal of Machine Learning Research (JMLR), 10, 2137–2155.
Blitzer, J., McDonald, R., & Pereira, F. (2006). Domain adaptation with structural correspondence learning. In Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 120–128).
Bruzzone, L., & Marconcini, M. (2010). Domain adaptation problems: A DASVM classification technique and a circular validation strategy. IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), 32(5), 770–787.
Chen, Y., Wang, G., & Dong, S. (2003). Learning with progressive transductive support vector machine. Pattern Recognition Letters (PRL), 24(12), 1845–1855.
Donoho, D. L. (2006). For most large underdetermined systems of linear equations the minimal l\(_{1}\)-norm solution is also the sparsest solution. Communications on Pure and Applied Mathematics, 59(6), 797–829.
Duan, L., Tsang, I. W., Xu, D., & Maybank, S. J. (2009). Domain transfer SVM for video concept detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 1375–1381).
Duan, L., Xu, D., Tsang, I., & Luo, J. (2012). Visual event recognition in videos by learning from web data. IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), 34(9), 1667–1680.
Dudík, M., Schapire, R. E., & Phillips, S. J. (2005). Correcting sample selection bias in maximum entropy density estimation. Advances in Neural Information Processing Systems (NIPS), 17, 323–330.
Efron, B., Hastie, T., Johnstone, I., & Tibshirani, R. (2004). Least angle regression. Annals of Statistics, 39(4), 407–499.
Gao, X., Wang, X., Li, X., & Tao, D. (2011). Transfer latent variable model based on divergence analysis. Pattern Recognition (PR), 44(10–11), 2358–2366.
Geng, B., Tao, D., & Xu, C. (2011). DAML: Domain adaptation metric learning. IEEE Transactions on Image Processing (T-IP), 20(10), 2980–2989.
Gong, B., Shi, Y., Sha, F., & Grauman, K. (2012). Geodesic flow kernel for unsupervised domain adaptation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 2066–2073).
Gopalan, R., Li, R., & Chellappa, R. (2011). Domain adaptation for object recognition: An unsupervised approach. In IEEE International Conference on Computer Vision (ICCV) (pp. 999–1006).
Gretton, A., Smola, A., Huang, J., Schmittfull, M., Borgwardt, K., & Schölkopf, B. (2009). Covariate shift by kernel mean matching. In Dataset shift in machine learning (pp. 131–160). Cambridge: MIT Press.
Gross, R., Matthews, I., Cohn, J., Kanade, T., & Baker, S. (2007). The CMU multi-pose, illumination, and expression (Multi-PIE) face database. Tech. rep. TR-07-08, Carnegie Mellon University Robotics Institute.
Daumé III, H. (2009). Bayesian multitask learning with latent hierarchies. In Conference on Uncertainty in Artificial Intelligence (UAI) (pp. 135–142).
He, X., & Niyogi, P. (2004). Locality preserving projections. Advances in Neural Information Processing Systems (NIPS), 16, 153–160.
Huang, J., Smola, A. J., Gretton, A., Borgwardt, K. M., & Schölkopf, B. (2006). Correcting sample selection bias by unlabeled data. In Advances in Neural Information Processing Systems (NIPS).
Huang, K., & Aviyente, S. (2007). Sparse representation for signal classification. Advances in Neural Information Processing Systems (NIPS), 19, 609–616.
Jhuo, I.-H., Liu, D., Lee, D. T., & Chang, S. F. (2012). Robust visual domain adaptation with low-rank reconstruction. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 2168–2175).
Jia, Y., Nie, F., & Zhang, C. (2009). Trace ratio problem revisited. IEEE Transactions on Neural Networks (T-NN), 20(4), 729–735.
Liu, C., & Wechsler, H. (2002). Gabor feature based classification using the enhanced Fisher linear discriminant model for face recognition. IEEE Transactions on Image Processing (T-IP), 11(4), 467–476.
Mehrotra, R., Agrawal, R., & Haider, S. A. (2012). Dictionary based sparse representation for domain adaptation. In ACM International Conference on Information and Knowledge Management (CIKM) (pp. 2395–2398).
Messer, K., Matas, J., Kittler, J., Lüttin, J., & Maitre, G. (1999). XM2VTSDB: The extended M2VTS database. In International Conference on Audio and Video-based Biometric Person Authentication (AVBPA) (pp. 72–77).
Pan, S. J., & Yang, Q. (2010). A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering (T-KDE), 22(10), 1345–1359.
Pan, S. J., Kwok, J. T., & Yang, Q. (2008). Transfer learning via dimensionality reduction. In AAAI Conference on Artificial Intelligence (AAAI) (pp. 677–682).
Pan, S. J., Tsang, I. W., Kwok, J. T., & Yang, Q. (2009). Domain adaptation via transfer component analysis. In International Joint Conference on Artificial Intelligence (IJCAI) (pp. 1187–1192).
Pan, S. J., Tsang, I. W., Kwok, J. T., & Yang, Q. (2011). Domain adaptation via transfer component analysis. IEEE Transactions on Neural Networks (T-NN), 22(2), 199–210.
Phillips, P. J., Flynn, P. J., Scruggs, T., Bowyer, K. W., Chang, J., Hoffman, K., et al. (2005). Overview of the face recognition grand challenge. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (Vol. 1, pp. 947–954).
Qiu, Q., Patel, V. M., Turaga, P., & Chellappa, R. (2012). Domain adaptive dictionary learning. In European Conference on Computer Vision (ECCV) (pp. 631–645).
Raina, R., Battle, A., Lee, H., Packer, B., & Ng, A. Y. (2007). Self-taught learning: Transfer learning from unlabeled data. In International Conference on Machine Learning (ICML) (pp. 759–766).
Roweis, S. T., & Saul, L. K. (2000). Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500), 2323–2326.
Shao, M., Castillo, C., Gu, Z., & Fu, Y. (2012). Low-rank transfer subspace learning. In IEEE International Conference on Data Mining (ICDM) (pp. 1104–1109).
Shi, Y., & Sha, F. (2012). Information-theoretical learning of discriminative clusters for unsupervised domain adaptation. In International Conference on Machine Learning (ICML).
Shimodaira, H. (2000). Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2), 227–244.
Si, S., Tao, D., & Geng, B. (2010). Bregman divergence-based regularization for transfer subspace learning. IEEE Transactions on Knowledge and Data Engineering (T-KDE), 22(7), 929–942.
Si, S., Liu, W., Tao, D., & Chan, K. P. (2011). Distribution calibration in Riemannian symmetric space. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 41(4), 921–930.
Su, Y., Shan, S., Chen, X., & Gao, W. (2009). Hierarchical ensemble of global and local classifiers for face recognition. IEEE Transactions on Image Processing (T-IP), 18(8), 1885–1896.
Sugiyama, M., Nakajima, S., Kashima, H., Buenau, P. V., & Kawanabe, M. (2008). Direct importance estimation with model selection and its application to covariate shift adaptation. Advances in Neural Information Processing Systems (NIPS), 20, 1433–1440.
Sugiyama, M., Krauledat, M., & Müller, K. R. (2007). Covariate shift adaptation by importance weighted cross validation. The Journal of Machine Learning Research (JMLR), 8, 985–1005.
Turk, M. A., & Pentland, A. P. (1991). Face recognition using eigenfaces. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 586–591).
Uribe, D. (2010). Domain adaptation in sentiment classification. In International Conference on Machine Learning and Applications (ICMLA) (pp. 857–860).
Wang, H., Yan, S., Xu, D., Tang, X., & Huang, T. (2007). Trace ratio vs. ratio trace for dimensionality reduction. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 1–8).
Wang, Z., Song, Y., & Zhang, C. (2008). Transferred dimensionality reduction. In European Conference on Principles of Data Mining and Knowledge Discovery (PKDD) (pp. 550–565).
Wright, J., Yang, A. Y., Ganesh, A., Sastry, S. S., & Ma, Y. (2009). Robust face recognition via sparse representation. IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), 31(2), 210–227.
Xue, Y., Liao, X., Carin, L., & Krishnapuram, B. (2007). Multi-task learning for classification with Dirichlet process priors. The Journal of Machine Learning Research (JMLR), 8, 35–63.
Zadrozny, B. (2004). Learning and evaluating classifiers under sample selection bias. In International Conference on Machine Learning (ICML) (p. 114).
Metadata
Title
Domain Adaptation for Face Recognition: Targetize Source Domain Bridged by Common Subspace
Authors
Meina Kan
Junting Wu
Shiguang Shan
Xilin Chen
Publication date
01.08.2014
Publisher
Springer US
Published in
International Journal of Computer Vision / Issue 1-2/2014
Print ISSN: 0920-5691
Electronic ISSN: 1573-1405
DOI
https://doi.org/10.1007/s11263-013-0693-1
