
Smoothing methods for convex inequalities and linear complementarity problems

Mathematical Programming

Abstract

A smooth approximation p(x, α) to the plus function max{x, 0} is obtained by integrating the sigmoid function 1/(1 + e^{-αx}), commonly used in neural networks. By means of this approximation, linear and convex inequalities are converted into smooth, convex unconstrained minimization problems, the solution of which approximates the solution of the original problem to a high degree of accuracy for α sufficiently large. In the special case when a Slater constraint qualification is satisfied, an exact solution can be obtained for finite α. Speedup over MINOS 5.4 was as high as 1142 times for linear inequalities of size 2000 × 1000, and 580 times for convex inequalities with 400 variables. Linear complementarity problems are converted into a system of smooth nonlinear equations and are solved by a quadratically convergent Newton method. For monotone LCPs with as many as 10 000 variables, the proposed approach was as much as 63 times faster than Lemke's method.
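To make the construction in the abstract concrete: one antiderivative of the sigmoid 1/(1 + e^{-αx}) is p(x, α) = x + (1/α) log(1 + e^{-αx}), and 0 < p(x, α) − max{x, 0} ≤ (log 2)/α, so p tightens onto the plus function as α grows. The sketch below is not the authors' code; the solver choice, the value of α, and the toy data are assumptions. It applies the smoothing to linear inequalities Ax ≤ b by minimizing the smooth convex function (1/2)||p(Ax − b, α)||^2 with an off-the-shelf quasi-Newton routine in place of the Newton-type methods reported in the paper.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit  # expit(z) = 1/(1 + exp(-z)), the sigmoid

    def smooth_plus(x, alpha):
        # p(x, alpha) = x + log(1 + exp(-alpha*x))/alpha, written in a
        # numerically stable form that avoids overflow for large |x|.
        return np.maximum(x, 0.0) + np.log1p(np.exp(-alpha * np.abs(x))) / alpha

    def solve_linear_inequalities(A, b, alpha=100.0):
        # Approximately solve A x <= b by minimizing the smooth convex
        # objective 0.5 * ||p(A x - b, alpha)||^2 (assumed formulation).
        def fun(x):
            r = A @ x - b
            v = smooth_plus(r, alpha)
            # gradient: A^T (p(r) * p'(r)), where p'(r) = sigmoid(alpha * r)
            g = A.T @ (v * expit(alpha * r))
            return 0.5 * v @ v, g
        res = minimize(fun, np.zeros(A.shape[1]), jac=True, method="L-BFGS-B")
        return res.x

    # toy usage: a strictly feasible random system
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 50))
    b = A @ rng.standard_normal(50) + 0.1
    x = solve_linear_inequalities(A, b)
    print("max violation of A x <= b:", (A @ x - b).max())  # should be < 0

For the linear complementarity problem 0 ≤ x, 0 ≤ Mx + q, x^T(Mx + q) = 0, a standard plus-function reformulation is x = (x − (Mx + q))_+; replacing the plus function by p gives the smooth square system F(x) = x − p(x − Mx − q, α) = 0, to which Newton's method can be applied, as the abstract indicates. The undamped iteration below, continuing from the definitions above, is a minimal sketch under that assumed reformulation (fixed α, no line search or safeguards); it is not the quadratically convergent method analyzed in the paper.

    def solve_lcp_newton(M, q, alpha=1e3, tol=1e-10, max_iter=50):
        # Undamped Newton on F(x) = x - p(x - (M x + q), alpha) = 0,
        # a smoothed form of the fixed-point condition x = (x - (M x + q))_+ .
        n = len(q)
        x = np.zeros(n)
        I = np.eye(n)
        for _ in range(max_iter):
            r = x - (M @ x + q)
            F = x - smooth_plus(r, alpha)
            if np.linalg.norm(F) < tol:
                break
            # Jacobian: I - diag(p'(r)) (I - M), with p'(r) = sigmoid(alpha * r)
            J = I - expit(alpha * r)[:, None] * (I - M)
            x = x + np.linalg.solve(J, -F)
        return x

    # toy usage: a small monotone LCP (M symmetric positive definite)
    B = rng.standard_normal((30, 30))
    M = B @ B.T + np.eye(30)
    q = rng.standard_normal(30)
    x = solve_lcp_newton(M, q)
    w = M @ x + q
    print("min x, min w, x.w:", x.min(), w.min(), x @ w)  # all approx >= 0, x.w near 0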




Additional information

This material is based on research supported by Air Force Office of Scientific Research Grant F49620-94-1-0036 and National Science Foundation Grants CCR-9101801 and CCR-9322479.


About this article

Cite this article

Chen, C., Mangasarian, O.L. Smoothing methods for convex inequalities and linear complementarity problems. Mathematical Programming 71, 51–69 (1995). https://doi.org/10.1007/BF01592244

