
2019 | Original Paper | Book Chapter

19. Matrix Completion

Authors: Ke-Lin Du, M. N. S. Swamy

Published in: Neural Networks and Statistical Learning

Publisher: Springer London


Abstract

The recovery of a data matrix from a subset of its entries is an extension of compressed sensing and sparse approximation. This chapter introduces matrix completion and matrix recovery. The ideas are also extended to tensor factorization and completion.
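The completion problem sketched in the abstract can be made concrete with a minimal implementation of singular value thresholding (SVT), one standard algorithm for recovering a low-rank matrix from observed entries. This is an illustrative sketch, not code from the chapter: the function name, parameter values, and the rank-1 toy data are assumptions chosen for demonstration.

```python
import numpy as np

def svt_complete(M_obs, mask, tau=2.0, delta=1.0, n_iters=500, tol=1e-4):
    """Singular value thresholding sketch for matrix completion.

    M_obs : matrix with zeros at unobserved positions
    mask  : boolean array, True where an entry is observed
    tau   : soft-threshold applied to singular values
    delta : step size (convergence requires 0 < delta < 2)
    """
    Y = np.zeros_like(M_obs)
    X = Y
    for _ in range(n_iters):
        # Shrinkage step: soft-threshold the singular values of Y by tau.
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        s = np.maximum(s - tau, 0.0)
        X = (U * s) @ Vt
        # Gradient step: correct X on the observed entries only.
        residual = mask * (M_obs - X)
        if np.linalg.norm(residual) <= tol * np.linalg.norm(mask * M_obs):
            break
        Y = Y + delta * residual
    return X

# Toy example: a rank-1 matrix with roughly 60% of entries observed.
rng = np.random.default_rng(0)
u, v = rng.standard_normal(20), rng.standard_normal(15)
M = np.outer(u, v)
mask = rng.random(M.shape) < 0.6
X_hat = svt_complete(M * mask, mask)
```

Each iteration alternates a low-rank shrinkage (via SVD) with a data-fidelity correction restricted to the observed entries; larger `tau` drives the iterates toward the minimum-nuclear-norm completion.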


Literatur
1.
Zurück zum Zitat Acar, E., Dunlavy, D. M., Kolda, T. G., & Morup, M. (2011). Scalable tensor factorizations for incomplete data. Chemometrics and Intelligent Laboratory Systems, 106(1), 41–56.CrossRef Acar, E., Dunlavy, D. M., Kolda, T. G., & Morup, M. (2011). Scalable tensor factorizations for incomplete data. Chemometrics and Intelligent Laboratory Systems, 106(1), 41–56.CrossRef
2.
Zurück zum Zitat Argyriou, A., Evgeniou, T., & Pontil, M. (2007). Multi-task feature learning. Advances in neural information processing systems (Vol. 20, pp. 243–272). Argyriou, A., Evgeniou, T., & Pontil, M. (2007). Multi-task feature learning. Advances in neural information processing systems (Vol. 20, pp. 243–272).
3.
Zurück zum Zitat Ashraphijuo, M., & Wang, X. (2017). Fundamental conditions for low-CP-rank tensor completion. Journal of Machine Learning Research, 18, 1–29.MathSciNetMATH Ashraphijuo, M., & Wang, X. (2017). Fundamental conditions for low-CP-rank tensor completion. Journal of Machine Learning Research, 18, 1–29.MathSciNetMATH
4.
Zurück zum Zitat Belkin, M., & Niyogi, P. (2003). Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15, 1373–1396.MATHCrossRef Belkin, M., & Niyogi, P. (2003). Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15, 1373–1396.MATHCrossRef
5.
Zurück zum Zitat Bhaskar, S. A. (2016). Probabilistic low-rank matrix completion from quantized measurements. Journal of Machine Learning Research, 17, 1–34.MathSciNetMATH Bhaskar, S. A. (2016). Probabilistic low-rank matrix completion from quantized measurements. Journal of Machine Learning Research, 17, 1–34.MathSciNetMATH
6.
Zurück zum Zitat Bhojanapalli, S., & Jain, P. (2014). Universal matrix completion. In Proceedings of the 31st International Conference on Machine Learning (pp. 1881–1889). Beijing, China. Bhojanapalli, S., & Jain, P. (2014). Universal matrix completion. In Proceedings of the 31st International Conference on Machine Learning (pp. 1881–1889). Beijing, China.
7.
Zurück zum Zitat Cai, T., & Zhou, W.-X. (2013). A max-norm constrained minimization approach to 1-bit matrix completion. Journal of Machine Learning Research, 14, 3619–3647.MathSciNetMATH Cai, T., & Zhou, W.-X. (2013). A max-norm constrained minimization approach to 1-bit matrix completion. Journal of Machine Learning Research, 14, 3619–3647.MathSciNetMATH
8.
Zurück zum Zitat Cai, J.-F., Candes, E. J., & Shen, Z. (2010). A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4), 1956–1982.MathSciNetMATHCrossRef Cai, J.-F., Candes, E. J., & Shen, Z. (2010). A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4), 1956–1982.MathSciNetMATHCrossRef
9.
Zurück zum Zitat Candes, E. J., & Plan, Y. (2010). Matrix completion with noise. Proceedings of the IEEE, 98(6), 925–936.CrossRef Candes, E. J., & Plan, Y. (2010). Matrix completion with noise. Proceedings of the IEEE, 98(6), 925–936.CrossRef
10.
Zurück zum Zitat Candes, E. J., & Recht, B. (2009). Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6), 717–772.MathSciNetMATHCrossRef Candes, E. J., & Recht, B. (2009). Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6), 717–772.MathSciNetMATHCrossRef
11.
Zurück zum Zitat Candes, E. J., & Tao, T. (2010). The power of convex relaxation: Near-optimal matrix completion. IEEE Transactions on Information Theory, 56(5), 2053–2080.MathSciNetMATHCrossRef Candes, E. J., & Tao, T. (2010). The power of convex relaxation: Near-optimal matrix completion. IEEE Transactions on Information Theory, 56(5), 2053–2080.MathSciNetMATHCrossRef
12.
13.
Zurück zum Zitat Cao, Y., & Xie, Y. (2015). Categorical matrix completion. In Proceedings of IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP) (pp. 369–372). Cancun, Mexico. Cao, Y., & Xie, Y. (2015). Categorical matrix completion. In Proceedings of IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP) (pp. 369–372). Cancun, Mexico.
14.
Zurück zum Zitat Carroll, J. D., & Chang, J.-J. (1970). Analysis of individual differences in multidimensional scaling via an \(N\)-way generalization of Eckart-Young decomposition. Psychometrika, 35(3), 283–319.MATHCrossRef Carroll, J. D., & Chang, J.-J. (1970). Analysis of individual differences in multidimensional scaling via an \(N\)-way generalization of Eckart-Young decomposition. Psychometrika, 35(3), 283–319.MATHCrossRef
15.
Zurück zum Zitat Chandrasekaran, V., Sanghavi, S., Parrilo, P. A., & Willsky, A. S. (2009). Sparse and low-rank matrix decompositions. In Proceedings of the 47th Annual Allerton Conference on Communication, Control, and Computing (pp. 962–967). Monticello, IL. Chandrasekaran, V., Sanghavi, S., Parrilo, P. A., & Willsky, A. S. (2009). Sparse and low-rank matrix decompositions. In Proceedings of the 47th Annual Allerton Conference on Communication, Control, and Computing (pp. 962–967). Monticello, IL.
16.
Zurück zum Zitat Chandrasekaran, V., Sanghavi, S., Parrilo, P. A., & Willsky, A. S. (2011). Rank-sparsity incoherence for matrix decomposition. SIAM Journal on Optimization, 21(2), 572–596.MathSciNetMATHCrossRef Chandrasekaran, V., Sanghavi, S., Parrilo, P. A., & Willsky, A. S. (2011). Rank-sparsity incoherence for matrix decomposition. SIAM Journal on Optimization, 21(2), 572–596.MathSciNetMATHCrossRef
17.
18.
Zurück zum Zitat Chen, Y., & Chi, Y. (2014). Robust spectral compressed sensing via structured matrix completion. IEEE Transactions on Information Theory, 60(10), 6576–6601.MathSciNetMATHCrossRef Chen, Y., & Chi, Y. (2014). Robust spectral compressed sensing via structured matrix completion. IEEE Transactions on Information Theory, 60(10), 6576–6601.MathSciNetMATHCrossRef
19.
Zurück zum Zitat Chen, C., He, B., & Yuan, X. (2012). Matrix completion via an alternating direction method. IMA Journal of Numerical Analysis, 32(1), 227–245.MathSciNetMATHCrossRef Chen, C., He, B., & Yuan, X. (2012). Matrix completion via an alternating direction method. IMA Journal of Numerical Analysis, 32(1), 227–245.MathSciNetMATHCrossRef
20.
Zurück zum Zitat Chen, Y., Jalali, A., Sanghavi, S., & Caramanis, C. (2013). Low-rank matrix recovery from errors and erasures. IEEE Transactions on Information Theory, 59(7), 4324–4337.CrossRef Chen, Y., Jalali, A., Sanghavi, S., & Caramanis, C. (2013). Low-rank matrix recovery from errors and erasures. IEEE Transactions on Information Theory, 59(7), 4324–4337.CrossRef
21.
Zurück zum Zitat Chen, Y., Bhojanapalli, S., Sanghavi, S., & Ward, R. (2015). Completing any low-rank matrix, provably. Journal of Machine Learning Research, 16, 2999–3034.MathSciNetMATH Chen, Y., Bhojanapalli, S., Sanghavi, S., & Ward, R. (2015). Completing any low-rank matrix, provably. Journal of Machine Learning Research, 16, 2999–3034.MathSciNetMATH
22.
Zurück zum Zitat Costantini, R., Sbaiz, L., & Susstrunk, S. (2008). Higher order SVD analysis for dynamic texture synthesis. IEEE Transactions on Image Processing, 17(1), 42–52.MathSciNetCrossRef Costantini, R., Sbaiz, L., & Susstrunk, S. (2008). Higher order SVD analysis for dynamic texture synthesis. IEEE Transactions on Image Processing, 17(1), 42–52.MathSciNetCrossRef
23.
Zurück zum Zitat Davenport, M. A., Plan, Y., van den Berg, E., & Wootters, M. (2014). 1-bit matrix completion. Information and Inference, 3, 189–223.MathSciNetMATHCrossRef Davenport, M. A., Plan, Y., van den Berg, E., & Wootters, M. (2014). 1-bit matrix completion. Information and Inference, 3, 189–223.MathSciNetMATHCrossRef
24.
Zurück zum Zitat De Lathauwer, L., De Moor, B., & Vandewalle, J. (2000). On the best rank-1 and rank-(R1,R2,...,RN) approximation of high-order tensors. SIAM Journal on Matrix Analysis and Applications, 21(4), 1324–1342. De Lathauwer, L., De Moor, B., & Vandewalle, J. (2000). On the best rank-1 and rank-(R1,R2,...,RN) approximation of high-order tensors. SIAM Journal on Matrix Analysis and Applications, 21(4), 1324–1342.
25.
Zurück zum Zitat De Lathauwer, L., De Moor, B., & Vandewalle, J. (2000). A multilinear singular value decomposition. SIAM Journal on Matrix Analysis and Applications, 21(4), 1253–1278.MathSciNetMATHCrossRef De Lathauwer, L., De Moor, B., & Vandewalle, J. (2000). A multilinear singular value decomposition. SIAM Journal on Matrix Analysis and Applications, 21(4), 1253–1278.MathSciNetMATHCrossRef
26.
Zurück zum Zitat Elhamifar, E., & Vidal, R. (2013). Sparse subspace clustering: Algorithm, theory, and applications. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(11), 2765–2781.CrossRef Elhamifar, E., & Vidal, R. (2013). Sparse subspace clustering: Algorithm, theory, and applications. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(11), 2765–2781.CrossRef
27.
Zurück zum Zitat Eriksson, A., & van den Hengel, A. (2012). Efficient computation of robust weighted low-rank matrix approximations using the \(L_1\) norm. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(9), 1681–1690.CrossRef Eriksson, A., & van den Hengel, A. (2012). Efficient computation of robust weighted low-rank matrix approximations using the \(L_1\) norm. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(9), 1681–1690.CrossRef
28.
Zurück zum Zitat Fan, J., & Chow, T. W. S. (2018). Non-linear matrix completion. Pattern Recognition, 77, 378–394.CrossRef Fan, J., & Chow, T. W. S. (2018). Non-linear matrix completion. Pattern Recognition, 77, 378–394.CrossRef
29.
Zurück zum Zitat Fazel, M. (2002). Matrix rank minimization with applications. Ph.D. thesis, Stanford University. Fazel, M. (2002). Matrix rank minimization with applications. Ph.D. thesis, Stanford University.
30.
Zurück zum Zitat Foygel, R., & Srebro, N. (2011). Concentration-based guarantees for low-rank matrix reconstruction. In JMLR: Workshop and Conference Proceedings (Vol. 19, pp. 315–339). Foygel, R., & Srebro, N. (2011). Concentration-based guarantees for low-rank matrix reconstruction. In JMLR: Workshop and Conference Proceedings (Vol. 19, pp. 315–339).
31.
Zurück zum Zitat Foygel, R., Shamir, O., Srebro, N., & Salakhutdinov, R. (2011). Learning with the weighted trace-norm under arbitrary sampling distributions. Advances in neural information processing systems (Vol. 24, pp. 2133–2141). Foygel, R., Shamir, O., Srebro, N., & Salakhutdinov, R. (2011). Learning with the weighted trace-norm under arbitrary sampling distributions. Advances in neural information processing systems (Vol. 24, pp. 2133–2141).
32.
Zurück zum Zitat Gandy, S., Recht, B., & Yamada, I. (2011). Tensor completion and low-\(n\)-rank tensor recovery via convex optimization. Inverse Problems, 27(2), 1–19.MathSciNetMATHCrossRef Gandy, S., Recht, B., & Yamada, I. (2011). Tensor completion and low-\(n\)-rank tensor recovery via convex optimization. Inverse Problems, 27(2), 1–19.MathSciNetMATHCrossRef
33.
Zurück zum Zitat Goldfarb, D., & Qin, Z. (2014). Robust low-rank tensor recovery: Models and algorithms. SIAM Journal on Matrix Analysis and Applications, 35(1), 225–253.MathSciNetMATHCrossRef Goldfarb, D., & Qin, Z. (2014). Robust low-rank tensor recovery: Models and algorithms. SIAM Journal on Matrix Analysis and Applications, 35(1), 225–253.MathSciNetMATHCrossRef
34.
Zurück zum Zitat Gross, D. (2011). Recovering low-rank matrices from few coefficients in any basis. IEEE Transactions on Information Theory, 57(3), 1548–1566.MathSciNetMATHCrossRef Gross, D. (2011). Recovering low-rank matrices from few coefficients in any basis. IEEE Transactions on Information Theory, 57(3), 1548–1566.MathSciNetMATHCrossRef
35.
Zurück zum Zitat Guo, K., Liu, L., Xu, X., Xu, D., & Tao, D. (2018). Godec+: Fast and robust low-rank matrix decomposition based on maximum correntropy. IEEE Transactions on Neural Networks and Learning Systems, 29(6), 2323–2336.MathSciNetCrossRef Guo, K., Liu, L., Xu, X., Xu, D., & Tao, D. (2018). Godec+: Fast and robust low-rank matrix decomposition based on maximum correntropy. IEEE Transactions on Neural Networks and Learning Systems, 29(6), 2323–2336.MathSciNetCrossRef
36.
Zurück zum Zitat Harshman, R. A. (1970). Foundations of the PARAFAC procedure: Models and conditions for an “explanatory” multimodal factor analysis. UCLA Working Papers in Phonetics (Vol. 16, pp. 1–84). Harshman, R. A. (1970). Foundations of the PARAFAC procedure: Models and conditions for an “explanatory” multimodal factor analysis. UCLA Working Papers in Phonetics (Vol. 16, pp. 1–84).
37.
Zurück zum Zitat Hastie, T., Mazumder, R., Lee, J. D., & Zadeh, R. (2015). Matrix completion and low-rank SVD via fast alternating least squares. Journal of Machine Learning Research, 16, 3367–3402.MathSciNetMATH Hastie, T., Mazumder, R., Lee, J. D., & Zadeh, R. (2015). Matrix completion and low-rank SVD via fast alternating least squares. Journal of Machine Learning Research, 16, 3367–3402.MathSciNetMATH
38.
Zurück zum Zitat He, X., Cai, D., Yan, S., & Zhang, H.-J. (2005). Neighborhood preserving embedding. In Proceedings of the 10th IEEE International Conference on Computer Vision (pp. 1208–1213). Beijing, China. He, X., Cai, D., Yan, S., & Zhang, H.-J. (2005). Neighborhood preserving embedding. In Proceedings of the 10th IEEE International Conference on Computer Vision (pp. 1208–1213). Beijing, China.
39.
Zurück zum Zitat He, X., Yan, S., Hu, Y., Niyogi, P., & Zhang, H. J. (2005). Face recognition using Laplacianfaces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(3), 328–340.CrossRef He, X., Yan, S., Hu, Y., Niyogi, P., & Zhang, H. J. (2005). Face recognition using Laplacianfaces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(3), 328–340.CrossRef
40.
Zurück zum Zitat Hillar, C. J., & Lim, L.-H. (2013). Most tensor problems are NP-hard. Journal of the ACM, 60(6), Article No. 45, 39 p. Hillar, C. J., & Lim, L.-H. (2013). Most tensor problems are NP-hard. Journal of the ACM, 60(6), Article No. 45, 39 p.
41.
Zurück zum Zitat Hu, R.-X., Jia, W., Huang, D.-S., & Lei, Y.-K. (2010). Maximum margin criterion with tensor representation. Neurocomputing, 73, 1541–1549.CrossRef Hu, R.-X., Jia, W., Huang, D.-S., & Lei, Y.-K. (2010). Maximum margin criterion with tensor representation. Neurocomputing, 73, 1541–1549.CrossRef
42.
Zurück zum Zitat Hu, Y., Zhang, D., Ye, J., Li, X., & He, X. (2013). Fast and accurate matrix completion via truncated nuclear norm regularization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(9), 2117–2130.CrossRef Hu, Y., Zhang, D., Ye, J., Li, X., & He, X. (2013). Fast and accurate matrix completion via truncated nuclear norm regularization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(9), 2117–2130.CrossRef
43.
Zurück zum Zitat Jain, P., & Oh, S. (2014). Provable tensor factorization with missing data. In Advances in neural information processing systems (Vol. 27, pp. 1431–1439). Jain, P., & Oh, S. (2014). Provable tensor factorization with missing data. In Advances in neural information processing systems (Vol. 27, pp. 1431–1439).
44.
Zurück zum Zitat Jain, P., Netrapalli, P., & S. Sanghavi, (2013). Low-rank matrix completion using alternating minimization. In Proceedings of the 45th Annual ACM Symposium on Theory of Computing (pp. 665–674). Jain, P., Netrapalli, P., & S. Sanghavi, (2013). Low-rank matrix completion using alternating minimization. In Proceedings of the 45th Annual ACM Symposium on Theory of Computing (pp. 665–674).
45.
Zurück zum Zitat Ji, S., & Ye, J. (2009). An accelerated gradient method for trace norm minimization. In Proceedings of the 26th Annual International Conference on Machine Learning (pp. 457–464). Montreal, Canada. Ji, S., & Ye, J. (2009). An accelerated gradient method for trace norm minimization. In Proceedings of the 26th Annual International Conference on Machine Learning (pp. 457–464). Montreal, Canada.
46.
Zurück zum Zitat Ke, Q., & Kanade, T. (2005). Robust \(L_1\) norm factorization in the presence of outliers and missing data by alternative convex programming. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (pp. 739–746). San Diego, CA. Ke, Q., & Kanade, T. (2005). Robust \(L_1\) norm factorization in the presence of outliers and missing data by alternative convex programming. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (pp. 739–746). San Diego, CA.
47.
Zurück zum Zitat Keshavan, R. H., Montanari, A., & Oh, S. (2010). Matrix completion from a few entries. IEEE Transactions on Information Theory, 56(6), 2980–2998.MathSciNetMATHCrossRef Keshavan, R. H., Montanari, A., & Oh, S. (2010). Matrix completion from a few entries. IEEE Transactions on Information Theory, 56(6), 2980–2998.MathSciNetMATHCrossRef
48.
Zurück zum Zitat Khan, S. A., & Kaski, S. (2014). Bayesian multi-view tensor factorization. In T. Calders, F. Esposito, E. Hullermeier, & R. Meo (Eds.), Proceedings of Joint European Conference on Machine Learning and Knowledge Discovery in Databases (pp. 656-671). Berlin: Springer. Khan, S. A., & Kaski, S. (2014). Bayesian multi-view tensor factorization. In T. Calders, F. Esposito, E. Hullermeier, & R. Meo (Eds.), Proceedings of Joint European Conference on Machine Learning and Knowledge Discovery in Databases (pp. 656-671). Berlin: Springer.
49.
Zurück zum Zitat Kilmer, M. E., Braman, K., Hao, N., & Hoover, R. C. (2013). Third-order tensors as operators on matrices: A theoretical and computational framework with applications in imaging. SIAM Journal on Matrix Analysis and Applications, 34(1), 148–172.MathSciNetMATHCrossRef Kilmer, M. E., Braman, K., Hao, N., & Hoover, R. C. (2013). Third-order tensors as operators on matrices: A theoretical and computational framework with applications in imaging. SIAM Journal on Matrix Analysis and Applications, 34(1), 148–172.MathSciNetMATHCrossRef
50.
Zurück zum Zitat Kim, Y.-D., & Choi, S. (2007). Nonnegative Tucker decomposition. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (pp. 1–8). Minneapolis, MN. Kim, Y.-D., & Choi, S. (2007). Nonnegative Tucker decomposition. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (pp. 1–8). Minneapolis, MN.
51.
Zurück zum Zitat Kim, E., Lee, M., Choi, C.-H., Kwak, N., & Oh, S. (2015). Efficient \(l_1\)-norm-based low-rank matrix approximations for large-scale problems using alternating rectified gradient method. IEEE Transactions on Neural Networks and Learning Systems, 26(2), 237–251.MathSciNetCrossRef Kim, E., Lee, M., Choi, C.-H., Kwak, N., & Oh, S. (2015). Efficient \(l_1\)-norm-based low-rank matrix approximations for large-scale problems using alternating rectified gradient method. IEEE Transactions on Neural Networks and Learning Systems, 26(2), 237–251.MathSciNetCrossRef
53.
Zurück zum Zitat Komodakis, N., & Tziritas, G. (2006). Image completion using global optimization. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 417–424). New York, NY. Komodakis, N., & Tziritas, G. (2006). Image completion using global optimization. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 417–424). New York, NY.
54.
Zurück zum Zitat Koren, Y., Bell, R., & Volinsky, C. (2009). Matrix factorization techniques for recommender systems. Computer, 42(8), 30–37.CrossRef Koren, Y., Bell, R., & Volinsky, C. (2009). Matrix factorization techniques for recommender systems. Computer, 42(8), 30–37.CrossRef
55.
Zurück zum Zitat Krishnamurthy, A., & Singh, A. (2013). Low-rank matrix and tensor completion via adaptive sampling. Advances in neural information processing systems (Vol. 26, pp. 836–844). Krishnamurthy, A., & Singh, A. (2013). Low-rank matrix and tensor completion via adaptive sampling. Advances in neural information processing systems (Vol. 26, pp. 836–844).
56.
Zurück zum Zitat Krishnamurthy, A., & Singh, A. (2014). On the power of adaptivity in matrix completion and approximation. arXiv preprint arXiv:1407.3619. Krishnamurthy, A., & Singh, A. (2014). On the power of adaptivity in matrix completion and approximation. arXiv preprint arXiv:​1407.​3619.
57.
Zurück zum Zitat Lafond, J., Klopp, O., Moulines, E., & Salmon, J. (2014). Probabilistic low-rank matrix completion on finite alphabets. Advances in neural information processing systems (Vol. 27, pp. 1727–1735). Cambridge: MIT Press. Lafond, J., Klopp, O., Moulines, E., & Salmon, J. (2014). Probabilistic low-rank matrix completion on finite alphabets. Advances in neural information processing systems (Vol. 27, pp. 1727–1735). Cambridge: MIT Press.
58.
Zurück zum Zitat Lai, Z., Xu, Y., Yang, J., Tang, J., & Zhang, D. (2013). Sparse tensor discriminant analysis. IEEE Transactions on Image Processing, 22(10), 3904–3915.MathSciNetMATHCrossRef Lai, Z., Xu, Y., Yang, J., Tang, J., & Zhang, D. (2013). Sparse tensor discriminant analysis. IEEE Transactions on Image Processing, 22(10), 3904–3915.MathSciNetMATHCrossRef
59.
Zurück zum Zitat Li, X. (2013). Compressed sensing and matrix completion with constant proportion of corruptions. Constructive Approximation, 37(1), 73–99.MathSciNetMATHCrossRef Li, X. (2013). Compressed sensing and matrix completion with constant proportion of corruptions. Constructive Approximation, 37(1), 73–99.MathSciNetMATHCrossRef
60.
Zurück zum Zitat Lin, Z., Chen, M., Wu, L., & Ma, Y. (2009). The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. Technical Report UILU-ENG-09-2215. Champaign, IL: Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign. Lin, Z., Chen, M., Wu, L., & Ma, Y. (2009). The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. Technical Report UILU-ENG-09-2215. Champaign, IL: Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign.
61.
Zurück zum Zitat Lin, Z., Ganesh, A., Wright, J., Wu, L., Chen, M., & Ma, Y. (2009). Fast convex optimization algorithms for exact recovery of a corrupted low-rank matrix. Technical Report UILU-ENG-09-2214. Champaign, IL: University of Illinois at Urbana-Champaign. Lin, Z., Ganesh, A., Wright, J., Wu, L., Chen, M., & Ma, Y. (2009). Fast convex optimization algorithms for exact recovery of a corrupted low-rank matrix. Technical Report UILU-ENG-09-2214. Champaign, IL: University of Illinois at Urbana-Champaign.
62.
Zurück zum Zitat Liu, G., Lin, Z., & Yu, Y. (2010). Robust subspace segmentation by low-rank representation. In Proceedings of the 25th International Conference on Machine Learning (pp. 663–670). Haifa, Israel. Liu, G., Lin, Z., & Yu, Y. (2010). Robust subspace segmentation by low-rank representation. In Proceedings of the 25th International Conference on Machine Learning (pp. 663–670). Haifa, Israel.
63.
Zurück zum Zitat Liu, J., Musialski, P., Wonka, P., & Ye, J. (2013). Tensor completion for estimating missing values in visual data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1), 208–220.CrossRef Liu, J., Musialski, P., Wonka, P., & Ye, J. (2013). Tensor completion for estimating missing values in visual data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1), 208–220.CrossRef
64.
Zurück zum Zitat Liu, Y., Jiao, L. C., Shang, F., Yin, F., & Liu, F. (2013). An efficient matrix bi-factorization alternative optimization method for low-rank matrix recovery and completion. Neural Networks, 48, 8–18.MATHCrossRef Liu, Y., Jiao, L. C., Shang, F., Yin, F., & Liu, F. (2013). An efficient matrix bi-factorization alternative optimization method for low-rank matrix recovery and completion. Neural Networks, 48, 8–18.MATHCrossRef
65.
Zurück zum Zitat Liu, G., Lin, Z., Yan, S., Sun, J., Yu, Y., & Ma, Y. (2013c). Robust recovery of subspace structures by low-rank representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1), 171–184.CrossRef Liu, G., Lin, Z., Yan, S., Sun, J., Yu, Y., & Ma, Y. (2013c). Robust recovery of subspace structures by low-rank representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1), 171–184.CrossRef
66.
Zurück zum Zitat Lu, H., Plataniotis, K. N., & Venetsanopoulos, A. N. (2008). MPCA: Multilinear principal component analysis of tensor objects. IEEE Transactions on Neural Networks, 19(1), 18–39.CrossRef Lu, H., Plataniotis, K. N., & Venetsanopoulos, A. N. (2008). MPCA: Multilinear principal component analysis of tensor objects. IEEE Transactions on Neural Networks, 19(1), 18–39.CrossRef
67.
Zurück zum Zitat Lu, H., Plataniotis, K. N., & Venetsanopoulos, A. N. (2009). Uncorrelated multilinear principal component analysis for unsupervised multilinear subspace learning. IEEE Transactions on Neural Networks, 20(11), 1820–1836.CrossRef Lu, H., Plataniotis, K. N., & Venetsanopoulos, A. N. (2009). Uncorrelated multilinear principal component analysis for unsupervised multilinear subspace learning. IEEE Transactions on Neural Networks, 20(11), 1820–1836.CrossRef
68.
Zurück zum Zitat Luo, Y., Tao, D., Ramamohanarao, K., & Xu, C. (2015). Tensor canonical correlation analysis for multi-view dimension reduction. IEEE Transactions on Knowledge and Data Engineering, 27(11), 3111–3124.CrossRef Luo, Y., Tao, D., Ramamohanarao, K., & Xu, C. (2015). Tensor canonical correlation analysis for multi-view dimension reduction. IEEE Transactions on Knowledge and Data Engineering, 27(11), 3111–3124.CrossRef
69.
Zurück zum Zitat Mackey, L., Talwalkar, A., & Jordan, M. I. (2015). Distributed matrix completion and robust factorization. Journal of Machine Learning Research, 16, 913–960.MathSciNetMATH Mackey, L., Talwalkar, A., & Jordan, M. I. (2015). Distributed matrix completion and robust factorization. Journal of Machine Learning Research, 16, 913–960.MathSciNetMATH
70.
Zurück zum Zitat Mu, C., Huang, B., Wright, J., & Goldfarb, D. (2014). Square deal: Lower bounds and improved relaxations for tensor recovery. In JMLR W&CP: Proceedings of the 31st International Conference on Machine Learning (Vol. 32). Beijing, China. Mu, C., Huang, B., Wright, J., & Goldfarb, D. (2014). Square deal: Lower bounds and improved relaxations for tensor recovery. In JMLR W&CP: Proceedings of the 31st International Conference on Machine Learning (Vol. 32). Beijing, China.
71.
Zurück zum Zitat Negahban, S., & Wainwright, M. J. (2012). Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. Journal of Machine Learning Research, 13, 1665–1697.MathSciNetMATH Negahban, S., & Wainwright, M. J. (2012). Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. Journal of Machine Learning Research, 13, 1665–1697.MathSciNetMATH
73.
Zurück zum Zitat Panagakis, Y., Kotropoulos, C., & Arce, G. R. (2010). Non-negative multilinear principal component analysis of auditory temporal modulations for music genre classification. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 18(3), 576–588.CrossRef Panagakis, Y., Kotropoulos, C., & Arce, G. R. (2010). Non-negative multilinear principal component analysis of auditory temporal modulations for music genre classification. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 18(3), 576–588.CrossRef
74.
Zurück zum Zitat Pitaval, R.-A., Dai, W., & Tirkkonen, O. (2015). Convergence of gradient descent for low-rank matrix approximation. IEEE Transactions on Information Theory, 61(8), 4451–4457.MathSciNetMATHCrossRef Pitaval, R.-A., Dai, W., & Tirkkonen, O. (2015). Convergence of gradient descent for low-rank matrix approximation. IEEE Transactions on Information Theory, 61(8), 4451–4457.MathSciNetMATHCrossRef
75.
Zurück zum Zitat Qi, Y., Comon, P., & Lim, L.-H. (2016). Uniqueness of nonnegative tensor approximations. IEEE Transactions on Information Theory, 62(4), 2170–2183.MathSciNetMATHCrossRef Qi, Y., Comon, P., & Lim, L.-H. (2016). Uniqueness of nonnegative tensor approximations. IEEE Transactions on Information Theory, 62(4), 2170–2183.MathSciNetMATHCrossRef
76.
Zurück zum Zitat Recht, B. (2011). A simpler approach to matrix completion. Journal of Machine Learning Research, 12, 3413–3430.MathSciNetMATH Recht, B. (2011). A simpler approach to matrix completion. Journal of Machine Learning Research, 12, 3413–3430.MathSciNetMATH
77.
Zurück zum Zitat Recht, B., Fazel, M., & Parrilo, P. A. (2010). Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 52(3), 471–501.MathSciNetMATHCrossRef Recht, B., Fazel, M., & Parrilo, P. A. (2010). Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 52(3), 471–501.MathSciNetMATHCrossRef
78.
Zurück zum Zitat Rennie, J. D. M., & Srebro, N. (2005). Fast maximum margin matrix factorization for collaborative prediction. In Proceedings of the 22nd International Conference of Machine Learning (pp. 713–719). Bonn, Germany. Rennie, J. D. M., & Srebro, N. (2005). Fast maximum margin matrix factorization for collaborative prediction. In Proceedings of the 22nd International Conference of Machine Learning (pp. 713–719). Bonn, Germany.
79.
Zurück zum Zitat Roweis, S. T., & Saul, L. K. (2000). Nonlinear dimensionality reduction by locally linear embedding. Science, 290, 2323–2326.CrossRef Roweis, S. T., & Saul, L. K. (2000). Nonlinear dimensionality reduction by locally linear embedding. Science, 290, 2323–2326.CrossRef
80.
Zurück zum Zitat Salakhutdinov, R., & Srebro, N. (2010). Collaborative filtering in a non-uniform world: Learning with the weighted trace norm. In J. LaFerty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, & A. Culotta (Eds.), Advances in neural information processing systems (Vol. 23, pp. 2056–2064). Cambridge: MIT Press. Salakhutdinov, R., & Srebro, N. (2010). Collaborative filtering in a non-uniform world: Learning with the weighted trace norm. In J. LaFerty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, & A. Culotta (Eds.), Advances in neural information processing systems (Vol. 23, pp. 2056–2064). Cambridge: MIT Press.
81. Shamir, O., & Shalev-Shwartz, S. (2014). Matrix completion with the trace norm: Learning, bounding, and transducing. Journal of Machine Learning Research, 15, 3401–3423.
82. Sorber, L., Van Barel, M., & De Lathauwer, L. (2013). Optimization-based algorithms for tensor decompositions: Canonical polyadic decomposition, decomposition in rank-\((L_r, L_r, 1)\) terms, and a new generalization. SIAM Journal on Optimization, 23(2), 695–720.
83. Srebro, N., & Shraibman, A. (2005). Rank, trace-norm and max-norm. In Proceedings of the 18th Annual Conference on Learning Theory (COLT) (pp. 545–560). Berlin: Springer.
84. Srebro, N., Rennie, J. D. M., & Jaakkola, T. S. (2004). Maximum-margin matrix factorization. Advances in neural information processing systems (Vol. 17, pp. 1329–1336).
85. Sun, W., Huang, L., So, H. C., & Wang, J. (2019). Orthogonal tubal rank-1 tensor pursuit for tensor completion. Signal Processing, 157, 213–224.
86. Takacs, G., Pilaszy, I., Nemeth, B., & Tikk, D. (2009). Scalable collaborative filtering approaches for large recommender systems. Journal of Machine Learning Research, 10, 623–656.
87. Tan, H., Cheng, B., Wang, W., Zhang, Y.-J., & Ran, B. (2014). Tensor completion via a multi-linear low-n-rank factorization model. Neurocomputing, 133, 161–169.
88. Tao, D., Li, X., Wu, X., Hu, W., & Maybank, S. J. (2007). Supervised tensor learning. Knowledge and Information Systems, 13(1), 1–42.
89. Tao, D., Li, X., Wu, X., & Maybank, S. J. (2007). General tensor discriminant analysis and Gabor features for gait recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(10), 1700–1715.
90. Tenenbaum, J. B., de Silva, V., & Langford, J. C. (2000). A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500), 2319–2323.
91. Toh, K.-C., & Yun, S. (2010). An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems. Pacific Journal of Optimization, 6(3), 615–640.
92. Tucker, L. R. (1966). Some mathematical notes on three-mode factor analysis. Psychometrika, 31(3), 279–311.
93. Tucker, L. R., & Harris, C. W. (1963). Implication of factor analysis of three-way matrices for measurement of change. In C. W. Harris (Ed.), Problems in measuring change (pp. 122–137). Madison: University of Wisconsin Press.
94. Vasilescu, M. A. O., & Terzopoulos, D. (2002). Multilinear analysis of image ensembles: Tensorfaces. In Proceedings of European Conference on Computer Vision, LNCS (Vol. 2350, pp. 447–460). Copenhagen, Denmark. Berlin: Springer.
95. Wang, W., Aggarwal, V., & Aeron, S. (2016). Tensor completion by alternating minimization under the tensor train (TT) model. arXiv:1609.05587.
96. Wong, R. K. W., & Lee, T. C. M. (2017). Matrix completion with noisy entries and outliers. Journal of Machine Learning Research, 18, 1–25.
97. Wright, J., Ganesh, A., Rao, S., Peng, Y., & Ma, Y. (2009). Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization. Advances in neural information processing systems (Vol. 22, pp. 2080–2088). Vancouver, Canada.
98. Xu, D., Yan, S., Tao, D., Zhang, L., Li, X., & Zhang, H. (2006). Human gait recognition with matrix representation. IEEE Transactions on Circuits and Systems for Video Technology, 16(7), 896–903.
99. Xu, D., Yan, S., Zhang, L., Lin, S., Zhang, H., & Huang, T. S. (2008). Reconstruction and recognition of tensor-based objects with concurrent subspaces analysis. IEEE Transactions on Circuits and Systems for Video Technology, 18(1), 36–47.
100. Yin, M., Cai, S., & Gao, J. (2013). Robust face recognition via double low-rank matrix recovery for feature extraction. In Proceedings of IEEE International Conference on Image Processing (pp. 3770–3774). Melbourne, Australia.
101. Yin, M., Gao, J., & Lin, Z. (2016). Laplacian regularized low-rank representation and its applications. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(3), 504–517.
102. Yokota, T., Zhao, Q., & Cichocki, A. (2016). Smooth PARAFAC decomposition for tensor completion. IEEE Transactions on Signal Processing, 64(20), 5423–5436.
103. Zafeiriou, S. (2009). Discriminant nonnegative tensor factorization algorithms. IEEE Transactions on Neural Networks, 20(2), 217–235.
104. Zhou, T., & Tao, D. (2011). GoDec: Randomized low-rank & sparse matrix decomposition in noisy case. In Proceedings of the 28th International Conference on Machine Learning (pp. 33–40). Bellevue, WA.
105. Zhou, Y., Wilkinson, D., Schreiber, R., & Pan, R. (2008). Large-scale parallel collaborative filtering for the Netflix Prize. In Proceedings of the 4th International Conference on Algorithmic Aspects in Information and Management (pp. 337–348). Berlin: Springer.
106. Zhou, G., Cichocki, A., Zhao, Q., & Xie, S. (2015). Efficient nonnegative Tucker decompositions: Algorithms and uniqueness. IEEE Transactions on Image Processing, 24(12), 4990–5003.
Metadata
Title: Matrix Completion
Authors: Ke-Lin Du, M. N. S. Swamy
Copyright year: 2019
Publisher: Springer London
DOI: https://doi.org/10.1007/978-1-4471-7452-3_19