
Randomized algorithms for the approximations of Tucker and the tensor train decompositions

Published in: Advances in Computational Mathematics

Abstract

Randomized algorithms provide a powerful tool for scientific computing. Compared with standard deterministic algorithms, randomized algorithms are often faster and more robust. The main purpose of this paper is to design adaptive randomized algorithms for computing approximate tensor decompositions. We give an adaptive randomized algorithm for computing a low multilinear rank approximation of a tensor with unknown multilinear rank and analyze its probabilistic error bound under certain assumptions. We then design an adaptive randomized algorithm for computing tensor train approximations. Both algorithms are analyzed using bounds on the singular values of sub-Gaussian matrices with independent columns or independent rows, and we illustrate them with several numerical examples.
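The adaptive algorithms themselves are developed in the paper; as a rough, illustrative sketch of the underlying idea only, the snippet below applies the Halko-Martinsson-Tropp randomized range finder mode-by-mode for a Tucker (low multilinear rank) approximation and sweep-by-sweep for a tensor train approximation, assuming NumPy. All function names (`randomized_range`, `randomized_tucker`, `randomized_tt`, `tt_to_full`) are illustrative, and fixed target ranks stand in for the paper's adaptive rank detection.

```python
import numpy as np

def randomized_range(A, rank, oversample=5, rng=None):
    """Orthonormal basis for an approximate range of A (HMT range finder)."""
    rng = np.random.default_rng() if rng is None else rng
    Y = A @ rng.standard_normal((A.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(Y)          # reduced QR of the random sketch
    return Q[:, :rank]

def unfold(T, mode):
    """Mode-n unfolding: mode-n fibers become the columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):
    """Mode-n product T x_n M."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def randomized_tucker(T, ranks, rng=None):
    """Low multilinear rank approximation: range finder on each unfolding."""
    factors = [randomized_range(unfold(T, n), r, rng=rng)
               for n, r in enumerate(ranks)]
    core = T
    for n, U in enumerate(factors):
        core = mode_mult(core, U.T, n)
    return core, factors

def randomized_tt(T, ranks, rng=None):
    """TT approximation: a TT-SVD-style sweep, with the range finder
    replacing the truncated SVD at every step."""
    shape, cores, r_prev = T.shape, [], 1
    C = T.reshape(shape[0], -1)
    for k in range(T.ndim - 1):
        C = C.reshape(r_prev * shape[k], -1)
        r = min(ranks[k], min(C.shape))
        Q = randomized_range(C, r, rng=rng)
        cores.append(Q.reshape(r_prev, shape[k], r))
        C = Q.T @ C                 # pass the compressed remainder along
        r_prev = r
    cores.append(C.reshape(r_prev, shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into a full tensor (for checking)."""
    F = cores[0]
    for G in cores[1:]:
        F = np.tensordot(F, G, axes=(-1, 0))
    return F.reshape([c.shape[1] for c in cores])

# Sanity check on tensors of exactly the target ranks.
rng = np.random.default_rng(0)
G0 = rng.standard_normal((3, 3, 3))
Us = [np.linalg.qr(rng.standard_normal((10, 3)))[0] for _ in range(3)]
T = G0
for n, U in enumerate(Us):
    T = mode_mult(T, U, n)
core, factors = randomized_tucker(T, (3, 3, 3), rng=rng)
R = core
for n, U in enumerate(factors):
    R = mode_mult(R, U, n)
tucker_err = np.linalg.norm(R - T) / np.linalg.norm(T)

T2 = tt_to_full([rng.standard_normal(s)
                 for s in [(1, 8, 3), (3, 8, 3), (3, 8, 1)]])
tt_err = (np.linalg.norm(tt_to_full(randomized_tt(T2, (3, 3), rng=rng)) - T2)
          / np.linalg.norm(T2))
```

Because the test tensors have exactly the target multilinear and TT ranks, both relative errors are zero up to rounding; on general tensors the accuracy is governed by probabilistic bounds of the kind the paper derives.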



Acknowledgements

The authors would like to thank the editor and two anonymous referees for their careful and detailed comments on our paper.

Author information


Correspondence to Yimin Wei.

Additional information

Communicated by: Ivan Oseledets

This author is supported by the Fundamental Research Funds for the Central Universities under grant JBK1801058.

This author is supported by the National Natural Science Foundation of China under grant 11771099 and International Cooperation Project of Shanghai Municipal Science and Technology Commission under grant 16510711200 and Shanghai Municipal Education Committee.


About this article


Cite this article

Che, M., Wei, Y. Randomized algorithms for the approximations of Tucker and the tensor train decompositions. Adv Comput Math 45, 395–428 (2019). https://doi.org/10.1007/s10444-018-9622-8

