Abstract
The Sampling Kaczmarz Motzkin (SKM) algorithm is a generalized method for solving large-scale systems of linear inequalities. Rooted in the relaxation method of Agmon, Motzkin, and Schoenberg and in the randomized Kaczmarz method, SKM outperforms state-of-the-art methods for solving large-scale Linear Feasibility (LF) problems. Motivated by SKM’s success, in this work we propose an Accelerated Sampling Kaczmarz Motzkin (ASKM) algorithm, which achieves better convergence than the standard SKM algorithm on ill-conditioned problems. We provide a thorough convergence analysis for the proposed accelerated algorithm and validate the results with numerical experiments. We compare the performance and effectiveness of the ASKM algorithm with SKM, the Interior Point Method (IPM), and the Active Set Method (ASM) on randomly generated instances as well as Netlib LPs. In most of the test instances, the proposed ASKM algorithm outperforms the other state-of-the-art methods.
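For readers who want the mechanics, the following is a minimal sketch of an SKM-style iteration for the LF problem \(Ax \le b\): sample a small block of rows, project onto the most violated sampled constraint (the Motzkin rule), and add a momentum step. The function name `askm`, the momentum parameter `gamma`, and all numerical values are illustrative assumptions; the momentum term is a simple stand-in for the paper's Nesterov-type acceleration, not the authors' exact update scheme.

```python
import numpy as np

def askm(A, b, beta=10, delta=1.0, gamma=0.3, max_iter=5000, tol=1e-8):
    """Sketch of a momentum-accelerated Sampling Kaczmarz-Motzkin
    iteration for the linear feasibility problem A x <= b."""
    m, n = A.shape
    rng = np.random.default_rng(0)
    x = np.ones(n)           # arbitrary (likely infeasible) starting point
    y = x.copy()
    for _ in range(max_iter):
        if np.max(A @ y - b) <= tol:          # stop once y is feasible
            return y
        S = rng.choice(m, size=min(beta, m), replace=False)  # sample beta rows
        viol = A[S] @ y - b[S]                # violations on the sample
        j = S[np.argmax(viol)]                # Motzkin rule: worst sampled row
        v = max(float(A[j] @ y - b[j]), 0.0)  # positive part of the violation
        x_new = y - delta * v / (A[j] @ A[j]) * A[j]  # relaxed projection
        y = x_new + gamma * (x_new - x)       # simple momentum step
        x = x_new
    return y

# Usage: a random system A x <= b with b > 0, so the origin is feasible.
A = np.random.default_rng(1).normal(size=(200, 5))
b = np.abs(np.random.default_rng(2).normal(size=200)) + 0.1
sol = askm(A, b)
print(np.max(A @ sol - b))  # maximum residual violation at the returned point
```

With `delta = 1` each step lands exactly on the hyperplane of the worst sampled constraint; `delta` between 0 and 2 gives the usual under/over-relaxation range from the relaxation-method literature.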
Notes
\(\delta \) here is the same as \(\lambda \) in De Loera et al. [9].
References
Strohmer, T., Vershynin, R.: A randomized Kaczmarz algorithm with exponential convergence. J. Fourier Anal. Appl. 15(2), 262 (2008)
Leventhal, D., Lewis, A.S.: Randomized methods for linear constraints: convergence rates and conditioning. Math. Oper. Res. 35(3), 641–654 (2010)
Needell, D.: Randomized Kaczmarz solver for noisy linear systems. BIT Numer. Math. 50(2), 395–403 (2010)
Drineas, P., Mahoney, M.W., Muthukrishnan, S., Sarlós, T.: Faster least squares approximation. Numer. Math. 117(2), 219–249 (2011)
Zouzias, A., Freris, N.M.: Randomized extended Kaczmarz for solving least squares. SIAM J. Matrix Anal. Appl. 34(2), 773–793 (2013)
Lee, Y.T., Sidford, A.: Efficient accelerated coordinate descent methods and faster algorithms for solving linear systems. In: Proceedings of the 2013 IEEE 54th Annual Symposium on Foundations of Computer Science, FOCS’13, Washington, DC, USA. IEEE Computer Society, pp. 147–156 (2013)
Ma, A., Needell, D., Ramdas, A.: Convergence properties of the randomized extended Gauss–Seidel and Kaczmarz methods. SIAM J. Matrix Anal. Appl. 36(4), 1590–1604 (2015)
Qu, Z., Richtárik, P., Takáč, M., Fercoq, O.: SDNA: stochastic dual Newton ascent for empirical risk minimization. In: Proceedings of the 33rd International Conference on Machine Learning, vol. 48, New York, USA, 20–22 June 2016. PMLR, pp. 1823–1832 (2016)
De Loera, J., Haddock, J., Needell, D.: A sampling Kaczmarz–Motzkin algorithm for linear feasibility. SIAM J. Sci. Comput. 39(5), S66–S87 (2017)
Dembo, R.S., Eisenstat, S.C., Steihaug, T.: Inexact Newton methods. SIAM J. Numer. Anal. 19(2), 400–408 (1982)
Eisenstat, S.C., Walker, H.F.: Choosing the forcing terms in an inexact Newton method. SIAM J. Sci. Comput. 17(1), 16–32 (1996)
Bellavia, S.: Inexact interior-point method. J. Optim. Theory Appl. 96(1), 109–121 (1998)
Zhao, X.-Y., Sun, D., Toh, K.-C.: A Newton-CG augmented Lagrangian method for semidefinite programming. SIAM J. Optim. 20(4), 1737–1765 (2010)
Wang, C., Aimin, X.: An inexact accelerated proximal gradient method and a dual Newton-CG method for the maximal entropy problem. J. Optim. Theory Appl. 157(2), 436–450 (2013)
Jiang, K., Sun, D., Toh, K.-C.: An inexact accelerated proximal gradient method for large scale linearly constrained convex SDP. SIAM J. Optim. 22(3), 1042–1064 (2012)
Gondzio, J.: Convergence analysis of an inexact feasible interior point method for convex quadratic programming. SIAM J. Optim. 23(3), 1510–1527 (2013)
Kaczmarz, S.: Angenäherte Auflösung von Systemen linearer Gleichungen. Bull. Int. Acad. Pol. Sci. Lett. 35, 355–357 (1937)
Herman, G.T.: Fundamentals of Computerized Tomography: Image Reconstruction from Projections, 2nd edn. Springer, New York (2009)
Censor, Y.: Row-action methods for huge and sparse systems and their applications. SIAM Rev. 23(4), 444–466 (1981)
Liu, J., Wright, S.J.: An accelerated randomized Kaczmarz algorithm. Math. Comput. 85(297), 153–178 (2016)
Agmon, S.: The relaxation method for linear inequalities. Can. J. Math. 6, 382–392 (1954)
Motzkin, T.S., Schoenberg, I.J.: The relaxation method for linear inequalities. Can. J. Math. 6, 393–404 (1954)
Rosenblatt, F.: The perceptron: a probabilistic model for information storage and organization in the brain. Psychol. Rev. 65(6), 386–408 (1958)
Ramdas, A., Peña, J.: Margins, kernels and non-linear smoothed perceptrons. In: Proceedings of the 31st International Conference on Machine Learning, vol. 32, Beijing, China, 22–24 June. PMLR, pp. 244–252 (2014)
Ramdas, A., Peña, J.: Towards a deeper geometric, analytic and algorithmic understanding of margins. Optim. Methods Softw. 31(2), 377–391 (2016)
Nutini, J., Sepehry, B., Laradji, I., Schmidt, M., Koepke, H., Virani, A.: Convergence rates for greedy Kaczmarz algorithms, and faster randomized Kaczmarz rules using the orthogonality graph. In: Proceedings of the Thirty-Second Conference on Uncertainty in Artificial Intelligence, UAI’16, Arlington, Virginia, USA, 2016. AUAI Press, pp. 547–556 (2016)
Petra, S., Popa, C.: Single projection Kaczmarz extended algorithms. Numer. Algorithms 73(3), 791–806 (2016)
Hoffman, A.J.: On approximate solutions of systems of linear inequalities. In: Selected Papers of Alan J Hoffman: With Commentary. World Scientific, pp. 174–176 (2003)
Xiang, X., Liu, W.T., Dai, X.: An accelerated randomized extended Kaczmarz algorithm. J. Phys.: Conf. Ser. 814(1), 012017 (2017)
Morshed, M.S., Noor-E-Alam, M.: Generalized affine scaling algorithms for linear programming problems. Comput. Oper. Res. 114, 104807 (2020)
Nesterov, Y.: A method for solving the convex programming problem with convergence rate \(O(1/k^{2})\). Sov. Math. Dokl. 27, 372–376 (1983)
Nesterov, Y.: Gradient methods for minimizing composite functions. Math. Program. 140(1), 125–161 (2013)
Nesterov, Y.: Introductory Lectures on Convex Optimization: A Basic Course, 1st edn. Springer, New York (2014)
Nesterov, Y.: Smooth minimization of non-smooth functions. Math. Program. 103(1), 127–152 (2005)
Nesterov, Y.: Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM J. Optim. 22(2), 341–362 (2012)
Gordon, R., Bender, R., Herman, G.T.: Algebraic reconstruction techniques (ART) for three-dimensional electron microscopy and X-ray photography. J. Theor. Biol. 29(3), 471–481 (1970)
Calafiore, G., El Ghaoui, L.: Optimization Models. Control Systems and Optimization Series. Cambridge University Press, Cambridge (2014)
Lichman, M.: UCI machine learning repository. (2013). http://archive.ics.uci.edu/ml
Yeh, I.-C., Lien, C.: The comparisons of data mining techniques for the predictive accuracy of probability of default of credit card clients. Expert Syst. Appl. 36(2), 2473–2480 (2009)
Acknowledgements
The authors are truly grateful to the anonymous referees and the editor for their valuable comments and suggestions in the revision process which have greatly improved the quality of this paper.
Cite this article
Morshed, M.S., Islam, M.S. & Noor-E-Alam, M. Accelerated sampling Kaczmarz Motzkin algorithm for the linear feasibility problem. J Glob Optim 77, 361–382 (2020). https://doi.org/10.1007/s10898-019-00850-6