Abstract
This article shows how to solve linear programs of the form min_{Ax=b, x≥0} c⊤x with n variables in time
O*((n^ω + n^(2.5−α/2) + n^(2+1/6)) log(n/δ)), where ω is the exponent of matrix multiplication, α is the dual exponent of matrix multiplication, and δ is the relative accuracy. For the current values ω ≈ 2.37 and α ≈ 0.31, our algorithm takes O*(n^ω log(n/δ)) time. When ω = 2, our algorithm takes O*(n^(2+1/6) log(n/δ)) time.
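As an illustrative sanity check (using only the values stated above, not code from the paper), one can verify which term of the exponent max(ω, 2.5 − α/2, 2 + 1/6) dominates the running time:

```python
# Which term dominates O*(n^omega + n^(2.5 - alpha/2) + n^(2 + 1/6))?
omega, alpha = 2.37, 0.31   # current (dual) matrix multiplication exponents

terms = {
    "omega": omega,                      # 2.37
    "2.5 - alpha/2": 2.5 - alpha / 2,    # 2.345
    "2 + 1/6": 2 + 1 / 6,                # ~2.1667
}
print(max(terms, key=terms.get))  # -> "omega": runtime is O*(n^omega log(n/delta))

# If omega = 2 then alpha = 1 (n x n^a by n^a x n products cost n^(2+o(1))),
# so the middle term drops to 2.0 and the n^(2 + 1/6) term dominates:
omega, alpha = 2.0, 1.0
terms = {"omega": omega, "2.5 - alpha/2": 2.5 - alpha / 2, "2 + 1/6": 2 + 1 / 6}
print(max(terms, key=terms.get))  # -> "2 + 1/6"
```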
Our algorithm utilizes several new concepts that we believe may be of independent interest:
• We define a stochastic central path method.
• We show how to maintain a projection matrix √W A⊤(AWA⊤)−1 A√W in sub-quadratic time under ℓ2 multiplicative changes in the diagonal matrix W.
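As a minimal sketch of the object being maintained (forming the matrix directly, rather than the paper's sub-quadratic maintenance data structure), the projection √W A⊤(AWA⊤)−1 A√W can be computed and checked to be an orthogonal projection of rank d:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 6                          # A: full-rank d x n constraint matrix, d < n
A = rng.standard_normal((d, n))
w = rng.uniform(0.5, 2.0, size=n)    # positive diagonal of W

# P = sqrt(W) A^T (A W A^T)^{-1} A sqrt(W)
sqrtW = np.diag(np.sqrt(w))
AW = A * w                           # A W via broadcasting over columns
P = sqrtW @ A.T @ np.linalg.solve(AW @ A.T, A @ sqrtW)

# P is the orthogonal projection onto the row space of A sqrt(W):
assert np.allclose(P @ P, P)         # idempotent
assert np.allclose(P, P.T)           # symmetric
assert np.isclose(np.trace(P), d)    # rank equals d
```

Recomputing P from scratch like this costs a full matrix inversion per change in W; the point of the maintenance result above is to update P much faster when W changes only multiplicatively in an ℓ2 sense.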