Abstract
Least squares approximation is a technique for finding an approximate solution to a system of linear equations that has no exact solution. In a typical setting, n is the number of constraints and d the number of variables, with n ≫ d. Existing exact methods then find a solution vector in O(nd²) time. We present two randomized algorithms that compute accurate relative-error approximations to the optimal value and the solution vector of a least squares approximation problem more rapidly than existing exact algorithms. Both of our algorithms preprocess the data with the Randomized Hadamard transform. One then uniformly samples constraints at random and solves the smaller problem on those constraints; the other applies a sparse random projection and solves the smaller problem on the projected coordinates. In both cases, solving the smaller problem yields a relative-error approximation, and, if n is sufficiently larger than d, the approximate solution can be computed in O(nd ln d) time.
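The sampling-based algorithm described above can be sketched in a few lines of NumPy. This is a toy illustration, not the paper's implementation: the function names `fwht` and `sample_least_squares` are made up for this sketch, the sample size `r` is left as a free parameter rather than set by the paper's theory, and an explicit loop-based Walsh–Hadamard transform stands in for the fast O(n ln n) recursive version.

```python
import numpy as np

def fwht(x):
    """Walsh-Hadamard transform along axis 0 (length must be a power of two)."""
    x = np.asarray(x, dtype=float).copy()
    h, m = 1, x.shape[0]
    while h < m:
        for i in range(0, m, 2 * h):
            a = x[i:i + h].copy()
            x[i:i + h] = a + x[i + h:i + 2 * h]
            x[i + h:i + 2 * h] = a - x[i + h:i + 2 * h]
        h *= 2
    return x / np.sqrt(m)

def sample_least_squares(A, b, r, seed=None):
    """Preprocess (A, b) with a randomized Hadamard transform H D,
    uniformly sample r rows, and solve the smaller least-squares problem."""
    n, d = A.shape
    rng = np.random.default_rng(seed)
    m = 1 << (n - 1).bit_length()            # pad row count to a power of two
    Ap = np.zeros((m, d)); Ap[:n] = A
    bp = np.zeros(m);      bp[:n] = b
    signs = rng.choice([-1.0, 1.0], size=m)  # random diagonal sign matrix D
    Ap = fwht(signs[:, None] * Ap)           # H D A: spreads out row norms
    bp = fwht(signs * bp)                    # H D b
    idx = rng.integers(0, m, size=r)         # uniform row sample
    # (a common sqrt(m/r) rescaling is omitted: with uniform weights it
    # does not change the solution of the sampled problem)
    x, *_ = np.linalg.lstsq(Ap[idx], bp[idx], rcond=None)
    return x
```

The Hadamard preprocessing is what makes plain uniform sampling safe: it flattens the leverage scores of the rows, so no constraint is too important to drop.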
Cite this article
Drineas, P., Mahoney, M.W., Muthukrishnan, S. et al. Faster least squares approximation. Numer. Math. 117, 219–249 (2011). https://doi.org/10.1007/s00211-010-0331-6