research-article
Open Access

Solving Linear Programs in the Current Matrix Multiplication Time

Published: 05 January 2021

Abstract

This article shows how to solve linear programs of the form min_{Ax = b, x ≥ 0} c⊤x with n variables in time

O*((n^ω + n^{2.5−α/2} + n^{2+1/6}) log(n/δ)), where ω is the exponent of matrix multiplication, α is the dual exponent of matrix multiplication, and δ is the relative accuracy. For the current values ω ≈ 2.37 and α ≈ 0.31, our algorithm takes O*(n^ω log(n/δ)) time. When ω = 2, our algorithm takes O*(n^{2+1/6} log(n/δ)) time.
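As a quick sanity check on the arithmetic behind these claims, one can compare the three exponents in the bound directly (a standalone sketch, not code from the paper):

```python
# Sketch: which term of O*((n^ω + n^{2.5 − α/2} + n^{2 + 1/6}) log(n/δ))
# dominates for given values of ω (matrix multiplication exponent) and
# α (dual exponent of matrix multiplication)?

def dominant_exponent(omega: float, alpha: float) -> float:
    """Return the largest of the three exponents in the running-time bound."""
    return max(omega, 2.5 - alpha / 2, 2 + 1 / 6)

# Current values ω ≈ 2.37, α ≈ 0.31: the n^ω term dominates, since
# 2.5 − 0.31/2 = 2.345 and 2 + 1/6 ≈ 2.167 are both below 2.37.
print(dominant_exponent(2.37, 0.31))  # → 2.37

# If ω were 2 (taking α = 1), the n^{2+1/6} term would dominate instead.
print(dominant_exponent(2.0, 1.0))    # → 2.1666666666666665
```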

Our algorithm utilizes several new concepts that we believe may be of independent interest:

• We define a stochastic central path method.

• We show how to maintain a projection matrix √W A⊤(A W A⊤)^{−1} A √W in sub-quadratic time under ℓ2 multiplicative changes in the diagonal matrix W.
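To make the second object concrete, the following illustrative sketch builds the matrix √W A⊤(A W A⊤)^{−1} A √W densely and checks that it is an orthogonal projection. (This is not the paper's sub-quadratic maintenance data structure; the dimensions and random inputs are placeholders.)

```python
# Illustrative sketch: form the projection matrix √W A⊤ (A W A⊤)^{-1} A √W
# explicitly for a small random A and a positive diagonal W, and verify
# the projection properties. The paper maintains this matrix implicitly
# in sub-quadratic time; here we just construct it directly.
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 6                       # A is d × n with full row rank (w.h.p.)
A = rng.standard_normal((d, n))
w = rng.uniform(0.5, 2.0, n)      # diagonal of W, strictly positive
sqrtW = np.diag(np.sqrt(w))

P = sqrtW @ A.T @ np.linalg.inv(A @ np.diag(w) @ A.T) @ A @ sqrtW

# P is symmetric and idempotent (P² = P), i.e., the orthogonal projection
# onto the row space of A√W; its trace equals rank(A) = d.
assert np.allclose(P, P.T)
assert np.allclose(P @ P, P)
assert np.isclose(np.trace(P), d)
```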



    • Published in

Journal of the ACM, Volume 68, Issue 1
February 2021
215 pages
ISSN: 0004-5411
EISSN: 1557-735X
DOI: 10.1145/3437069

      Copyright © 2021 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 5 January 2021
      • Revised: 1 September 2020
      • Accepted: 1 September 2020
      • Received: 1 June 2019
Published in Journal of the ACM, Volume 68, Issue 1


      Qualifiers

      • research-article
      • Research
      • Refereed
