
01 August 2014

Generalized Transfer Subspace Learning Through Low-Rank Constraint

Authors: Ming Shao, Dmitry Kit, Yun Fu

Published in: International Journal of Computer Vision | Issue 1-2/2014

Abstract

Labeled real-world visual data is expensive to obtain for training supervised algorithms, so it is valuable to leverage existing databases of labeled data. However, data in these source databases is often collected under conditions that differ from those of the new task. Transfer learning provides techniques for carrying learned knowledge from a source domain to a target domain by finding a mapping between them. In this paper, we discuss a method that projects both source and target data into a generalized subspace in which each target sample can be represented by some combination of source samples. By imposing a low-rank constraint during this transfer, the structure of the source and target domains is preserved. This approach has three benefits. First, good alignment between the domains is ensured, because only relevant data in some subspace of the source domain is used to reconstruct the data in the target domain. Second, the discriminative power of the source domain is naturally passed on to the target domain. Third, noisy information is filtered out during knowledge transfer. Extensive experiments on synthetic data and on important computer vision problems, such as face recognition and visual domain adaptation for object recognition, demonstrate the superiority of the proposed approach over existing, well-established methods.
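
To make the low-rank transfer idea concrete, here is a minimal NumPy sketch (an illustration under simplifying assumptions, not the authors' full algorithm): given source and target samples that have already been projected into a shared subspace, it estimates a nuclear-norm-regularized coefficient matrix Z so that each target sample is reconstructed from a combination of source samples. The helper names, the fixed projection, and the plain proximal-gradient solver are illustrative choices; the paper's formulation additionally learns the projection itself and, as the abstract notes, filters out noise during the transfer.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def low_rank_reconstruction(Xs, Xt, lam=1.0, n_iter=500):
    """Hypothetical helper (not the paper's solver): minimize
    0.5 * ||Xt - Xs @ Z||_F^2 + lam * ||Z||_* by proximal gradient descent.

    Xs (d x n_s) and Xt (d x n_t) hold already-projected source and target
    samples as columns. The returned coefficient matrix Z (n_s x n_t) tends to
    be low rank, so the target samples are explained by a few shared source
    structures rather than by arbitrary per-sample combinations.
    """
    Z = np.zeros((Xs.shape[1], Xt.shape[1]))
    step = 1.0 / (np.linalg.norm(Xs, 2) ** 2 + 1e-12)   # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = Xs.T @ (Xs @ Z - Xt)                     # gradient of the fitting term
        Z = svt(Z - step * grad, step * lam)            # gradient step + nuclear-norm prox
    return Z

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Xs = rng.standard_normal((20, 40))                  # projected source samples
    Xt = Xs @ rng.standard_normal((40, 3)) @ rng.standard_normal((3, 30))  # targets from a rank-3 mix
    Z = low_rank_reconstruction(Xs, Xt, lam=5.0)
    print("effective rank of Z:", int(np.sum(np.linalg.svd(Z, compute_uv=False) > 1e-3)))
```

In the full method this reconstruction term would be coupled with the subspace-learning objective itself and solved jointly; the sketch above isolates only the low-rank coupling between the two domains.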


Footnotes
1. So far, we have treated larger energies as better for the subspace learning method. This will change later, once we start minimizing rather than maximizing the objective function.
 
2. We use the minimization form of PCA rather than the maximization form so that it fits the low-rank transfer subspace learning (LTSL) framework (see the identity below).
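
As a concrete illustration (a standard identity, not quoted from the paper), the usual maximization form of PCA can be recast as an equivalent minimization so that it fits a framework whose objective is minimized:

$$\max_{P^{\top}P=I} \operatorname{tr}\!\left(P^{\top} S P\right) \;\Longleftrightarrow\; \min_{P^{\top}P=I} \; -\operatorname{tr}\!\left(P^{\top} S P\right),$$

where $S$ denotes the data scatter (covariance) matrix and $P$ the learned projection.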
 
3. Note that we use ULPP and UNPE to denote unsupervised LPP and NPE, and SLPP and SNPE to denote supervised LPP and NPE.
 
References
Argyriou, A., Evgeniou, T., & Pontil, M. (2007). Multi-task feature learning. In Advances in neural information processing systems. Cambridge, MA: The MIT Press.
Arnold, A., Nallapati, R., & Cohen, W. (2007). A comparative study of methods for transductive transfer learning. IEEE International Conference on Data Mining (Workshops), 77–82.
Aytar, Y., & Zisserman, A. (2011). Tabula rasa: Model transfer for object category detection. IEEE International Conference on Computer Vision, 2252–2259.
Bartels, R. H., & Stewart, G. (1972). Solution of the matrix equation AX + XB = C [F4]. Communications of the ACM, 15(9), 820–826.
Belhumeur, P., Hespanha, J., & Kriegman, D. (1997). Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7), 711–720.
Belkin, M., & Niyogi, P. (2003). Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6), 1373–1396.
Blitzer, J., McDonald, R., & Pereira, F. (2006). Domain adaptation with structural correspondence learning. In Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics (pp. 120–128).
Blitzer, J., Foster, D., & Kakade, S. (2011). Domain adaptation with coupled subspaces. Journal of Machine Learning Research - Proceedings Track, 15, 173–181.
Cai, J. F., Candès, E. J., & Shen, Z. (2010). A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20, 1956–1982.
Candès, E., & Recht, B. (2009). Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6), 717–772.
Candès, E., Li, X., Ma, Y., & Wright, J. (2011). Robust principal component analysis? Journal of the ACM.
Chen, M., Weinberger, K., & Blitzer, J. (2011). Co-training for domain adaptation. In Advances in neural information processing systems.
Coppersmith, D., & Winograd, S. (1990). Matrix multiplication via arithmetic progressions. Journal of Symbolic Computation, 9(3), 251–280.
Dai, W., Xue, G., Yang, Q., & Yu, Y. (2007a). Co-clustering based classification for out-of-domain documents. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM (pp. 210–219).
Dai, W., Xue, G. R., Yang, Q., & Yu, Y. (2007b). Transferring naive Bayes classifiers for text classification. In AAAI Conference on Artificial Intelligence (pp. 540–545).
Dai, W., Yang, Q., Xue, G., & Yu, Y. (2007c). Boosting for transfer learning. In International Conference on Machine Learning, ACM (pp. 193–200).
Daumé, H. (2007). Frustratingly easy domain adaptation. Annual Meeting - Association for Computational Linguistics, 45, 256–263.
Daumé III, H., & Marcu, D. (2006). Domain adaptation for statistical classifiers. Journal of Artificial Intelligence Research, 26(1), 101–126.
Duan, L., Tsang, I. W., Xu, D., & Chua, T. S. (2009). Domain adaptation from multiple sources via auxiliary classifiers. In International Conference on Machine Learning, ACM (pp. 289–296).
Duan, L., Xu, D., & Chang, S. F. (2012a). Exploiting web images for event recognition in consumer videos: A multiple source domain adaptation approach. IEEE Conference on Computer Vision and Pattern Recognition, 1338–1345.
Duan, L., Xu, D., & Tsang, I. (2012b). Domain adaptation from multiple sources: A domain-dependent regularization approach. IEEE Transactions on Neural Networks and Learning Systems, 23(3), 504–518.
Duan, L., Xu, D., Tsang, I. W. H., & Luo, J. (2012c). Visual event recognition in videos by learning from web data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(9), 1667–1680.
Eckstein, J., & Bertsekas, D. (1992). On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Mathematical Programming, 55(1), 293–318.
Gao, J., Fan, W., Jiang, J., & Han, J. (2008). Knowledge transfer via multiple model local structure mapping. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM (pp. 283–291).
Glorot, X., Bordes, A., & Bengio, Y. (2011). Domain adaptation for large-scale sentiment classification: A deep learning approach. In International Conference on Machine Learning, ACM (pp. 513–520).
Gong, B., Shi, Y., Sha, F., & Grauman, K. (2012). Geodesic flow kernel for unsupervised domain adaptation. In IEEE Conference on Computer Vision and Pattern Recognition (pp. 2066–2073).
Gopalan, R., Li, R., & Chellappa, R. (2011). Domain adaptation for object recognition: An unsupervised approach. In IEEE International Conference on Computer Vision (pp. 999–1006).
Griffin, G., Holub, A., & Perona, P. (2007). Caltech-256 object category dataset. Tech. rep., California Institute of Technology.
He, X., & Niyogi, P. (2004). Locality preserving projections. In Advances in neural information processing systems (Vol. 16). Cambridge, MA: The MIT Press.
He, X., Cai, D., Yan, S., & Zhang, H. (2005). Neighborhood preserving embedding. IEEE International Conference on Computer Vision, 2, 1208–1213.
Ho, J., Yang, M., Lim, J., Lee, K., & Kriegman, D. (2003). Clustering appearances of objects under varying illumination conditions. In IEEE Conference on Computer Vision and Pattern Recognition (Vol. 1, pp. 1–11).
Hoffman, J., Rodner, E., Donahue, J., Saenko, K., & Darrell, T. (2013). Efficient learning of domain-invariant image representations. arXiv preprint arXiv:1301.3224.
Jhuo, I. H., Liu, D., Lee, D., & Chang, S. F. (2012). Robust visual domain adaptation with low-rank reconstruction. In IEEE Conference on Computer Vision and Pattern Recognition (pp. 2168–2175).
Jiang, J., & Zhai, C. (2007). Instance weighting for domain adaptation in NLP. Annual Meeting - Association for Computational Linguistics, 45, 264–271.
Jiang, W., Zavesky, E., Chang, S. F., & Loui, A. (2008). Cross-domain learning methods for high-level visual concept classification. In IEEE International Conference on Image Processing (pp. 161–164).
Keshavan, R., Montanari, A., & Oh, S. (2010). Matrix completion from noisy entries. The Journal of Machine Learning Research, 99, 2057–2078.
Kulis, B., Jain, P., & Grauman, K. (2009). Fast similarity search for learned metrics. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(12), 2143–2157.
Kulis, B., Saenko, K., & Darrell, T. (2011). What you saw is not what you get: Domain adaptation using asymmetric kernel transforms. In IEEE Conference on Computer Vision and Pattern Recognition (pp. 1785–1792).
Lawrence, N., & Platt, J. (2004). Learning to learn with the informative vector machine. In International Conference on Machine Learning, ACM (pp. 65–72).
Lim, J., Salakhutdinov, R., & Torralba, A. (2011). Transfer learning by borrowing examples for multiclass object detection. In Advances in neural information processing systems. Cambridge, MA: The MIT Press.
Lin, Z., Chen, M., Wu, L., & Ma, Y. (2009). The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. Technical Report UILU-ENG-09-2215.
Liu, G., Lin, Z., & Yu, Y. (2010). Robust subspace segmentation by low-rank representation. In International Conference on Machine Learning (pp. 663–670).
Liu, G., Lin, Z., Yan, S., Sun, J., Yu, Y., & Ma, Y. (2013). Robust recovery of subspace structures by low-rank representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1), 171–184.
Lopez-Paz, D., Hernández-Lobato, J., & Schölkopf, B. (2012). Semi-supervised domain adaptation with non-parametric copulas. In Advances in neural information processing systems. Cambridge, MA: The MIT Press.
Lu, L., & Vidal, R. (2006). Combined central and subspace clustering for computer vision applications. In International Conference on Machine Learning, ACM (pp. 593–600).
Mihalkova, L., Huynh, T., & Mooney, R. (2007). Mapping and revising Markov logic networks for transfer learning. In AAAI Conference on Artificial Intelligence (pp. 608–614).
Pan, S. J., & Yang, Q. (2010). A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10), 1345–1359.
Qi, G. J., Aggarwal, C., Rui, Y., Tian, Q., Chang, S., & Huang, T. (2011). Towards cross-category knowledge propagation for learning visual concepts. In IEEE Conference on Computer Vision and Pattern Recognition (pp. 897–904).
Raina, R., Battle, A., Lee, H., Packer, B., & Ng, A. (2007). Self-taught learning: Transfer learning from unlabeled data. In International Conference on Machine Learning (pp. 759–766).
Roweis, S. T., & Saul, L. K. (2000). Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500), 2323–2326.
Saenko, K., Kulis, B., Fritz, M., & Darrell, T. (2010). Adapting visual category models to new domains. In European Conference on Computer Vision (pp. 213–226).
Shao, M., Xia, S., & Fu, Y. (2011). Genealogical face recognition based on UB KinFace database. In IEEE Conference on Computer Vision and Pattern Recognition (Workshop on Biometrics) (pp. 65–70).
Shao, M., Castillo, C., Gu, Z., & Fu, Y. (2012). Low-rank transfer subspace learning. In IEEE International Conference on Data Mining (pp. 1104–1109).
Si, S., Tao, D., & Geng, B. (2010). Bregman divergence-based regularization for transfer subspace learning. IEEE Transactions on Knowledge and Data Engineering, 22(7), 929–942.
Sun, Q., Chattopadhyay, R., Panchanathan, S., & Ye, J. (2011). A two-stage weighting framework for multi-source domain adaptation. In Advances in neural information processing systems. Cambridge, MA: The MIT Press.
Turk, M., & Pentland, A. (1991). Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1), 71–86.
Wang, Z., Song, Y., & Zhang, C. (2008). Transferred dimensionality reduction. In Machine learning and knowledge discovery in databases (pp. 550–565). New York: Springer.
Wright, J., Ganesh, A., Rao, S., Peng, Y., & Ma, Y. (2009). Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization. Advances in Neural Information Processing Systems, 22, 2080–2088.
Yan, S., Xu, D., Zhang, B., Zhang, H., Yang, Q., & Lin, S. (2007). Graph embedding and extensions: A general framework for dimensionality reduction. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(1), 40–51.
Yang, J., Yan, R., & Hauptmann, A. G. (2007). Cross-domain video concept detection using adaptive SVMs. In International Conference on Multimedia, ACM (pp. 188–197).
Yang, J., Yin, W., Zhang, Y., & Wang, Y. (2009). A fast algorithm for edge-preserving variational multichannel image restoration. SIAM Journal on Imaging Sciences, 2(2), 569–592.
Zhang, C., Ye, J., & Zhang, L. (2012). Generalization bounds for domain adaptation. In Advances in neural information processing systems. Cambridge, MA: The MIT Press.
Zhang, T., Tao, D., & Yang, J. (2008). Discriminative locality alignment. In European Conference on Computer Vision (pp. 725–738). New York: Springer.
Metadata
Title
Generalized Transfer Subspace Learning Through Low-Rank Constraint
Authors
Ming Shao
Dmitry Kit
Yun Fu
Publication date
01 August 2014
Publisher
Springer US
Published in
International Journal of Computer Vision / Issue 1-2/2014
Print ISSN: 0920-5691
Electronic ISSN: 1573-1405
DOI
https://doi.org/10.1007/s11263-014-0696-6
