A fast dual proximal-gradient method for separable convex optimization with linear coupled constraints

Computational Optimization and Applications

Abstract

In this paper, we consider a class of separable convex optimization problems with linear coupled constraints, which arise in many applications. Based on Nesterov’s smoothing technique and a fast gradient scheme, we present a fast dual proximal-gradient method for this class of problems. Under suitable conditions, we give a convergence analysis of the proposed method and show that its computational complexity bound for achieving an \(\varepsilon \)-optimal feasible solution is \(\mathscr {O}(1/\varepsilon )\). To accelerate the algorithm further, we employ a restart technique that successively decreases the smoothing parameter. An advantage of our algorithms is that they produce approximate dual and primal solutions simultaneously and can be implemented in a parallel fashion. Several numerical experiments illustrate the effectiveness and efficiency of the proposed algorithms.
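To give a flavor of the algorithmic template, the following Python code is a minimal illustrative sketch, not the authors’ exact method: a FISTA-type accelerated ascent applied to a Nesterov-smoothed dual of the toy separable problem \(\min \sum _i \Vert x_i\Vert _1\) s.t. \(\sum _i A_i x_i = b\). The names `fast_dual_prox_grad`, `soft_threshold`, and `A_blocks`, and the choice \(f_i = \Vert \cdot \Vert _1\), are our own assumptions. The smoothing term \((\mu /2)\Vert x_i\Vert ^2\) makes each block subproblem strongly convex, so the smoothed dual gradient is Lipschitz with constant \(\Vert A\Vert ^2/\mu \) and each block has a closed-form solution that can be computed independently (hence in parallel).

```python
import numpy as np

def soft_threshold(v, t):
    # Componentwise soft-thresholding (the l1 proximal map).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fast_dual_prox_grad(A_blocks, b, mu=1e-3, iters=500):
    # Illustrative sketch (our assumptions, not the paper's exact algorithm):
    # accelerated gradient ascent on the smoothed dual of
    #   min  sum_i ||x_i||_1   s.t.  sum_i A_i x_i = b,
    # where each block subproblem carries a (mu/2)||x_i||^2 smoothing term.
    m = b.shape[0]
    # Lipschitz constant of the smoothed dual gradient: ||A||^2 / mu.
    L = np.linalg.norm(np.hstack(A_blocks), 2) ** 2 / mu
    lam = np.zeros(m)            # dual multiplier
    y, t = lam.copy(), 1.0       # FISTA-style extrapolation state

    def primal_blocks(lmbda):
        # x_i(lambda) = argmin ||x_i||_1 + lambda^T A_i x_i + (mu/2)||x_i||^2
        # has the closed form below; the blocks are independent (parallelizable).
        return [soft_threshold(-Ai.T @ lmbda, 1.0) / mu for Ai in A_blocks]

    for _ in range(iters):
        xs = primal_blocks(y)
        # Gradient of the (concave) smoothed dual at y, by Danskin's theorem.
        grad = sum(Ai @ xi for Ai, xi in zip(A_blocks, xs)) - b
        lam_next = y + grad / L  # ascent step with step size 1/L
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = lam_next + ((t - 1.0) / t_next) * (lam_next - lam)
        lam, t = lam_next, t_next

    return primal_blocks(lam), lam
```

The restart idea described in the abstract can be mimicked by calling `fast_dual_prox_grad` repeatedly with a decreasing `mu`, warm-starting `lam` from the previous run: a smaller smoothing parameter tightens the approximation error but enlarges the Lipschitz constant \(L = \Vert A\Vert ^2/\mu \), which is precisely the trade-off the \(\mathscr {O}(1/\varepsilon )\) complexity bound balances.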



Acknowledgments

We are grateful to the anonymous referees and the editor for their useful comments, which have made the paper clearer and more comprehensive than the earlier version. This work was partially supported by the Chinese Scholarship Council (Grant 201508500038), the Natural Science Foundation of China (Grants 11471062, 11501070 and 71501017), the Natural Science Foundation of Chongqing (cstc2013jjB00001 and cstc2015jcyjA0382), the Scientific and Technological Research Program of Chongqing Municipal Education Commission (KJ1500301), and the Chongqing Normal University Research Foundation (15XLB005).

Author information

Correspondence to Zhiyou Wu.

About this article


Cite this article

Li, J., Chen, G., Dong, Z. et al. A fast dual proximal-gradient method for separable convex optimization with linear coupled constraints. Comput Optim Appl 64, 671–697 (2016). https://doi.org/10.1007/s10589-016-9826-0

