2020 | Original Paper | Book Chapter

6. Accelerated Parallel Algorithms

Authors: Zhouchen Lin, Huan Li, Cong Fang

Published in: Accelerated Optimization for Machine Learning

Publisher: Springer Singapore

Abstract

This chapter reviews accelerated parallel algorithms. We first introduce accelerated asynchronous algorithms, including accelerated asynchronous gradient descent and accelerated asynchronous coordinate descent. We then turn to accelerated distributed algorithms, categorized by centralized and decentralized communication topologies. For both topologies, we introduce communication-efficient accelerated stochastic dual coordinate ascent. In particular, we focus on the stochastic variant, in which each agent uses only a subset of its samples at each iteration.
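To make the asynchronous setting concrete, the minimal Python sketch below runs several lock-free worker threads that update a shared parameter vector on a synthetic least-squares problem. It is an illustration only, not the accelerated algorithms analyzed in this chapter: the problem, the heavy-ball form of momentum, and all step-size choices are assumptions made for the example. Its purpose is to show where staleness enters an asynchronous update, since each worker may compute its gradient at a slightly outdated iterate, which is exactly the delay that accelerated asynchronous analyses must control.

```python
# Hypothetical illustration only: lock-free asynchronous SGD with heavy-ball
# momentum on a synthetic least-squares problem.  This is NOT the accelerated
# asynchronous method of this chapter; it only shows how stale reads arise
# when workers update a shared iterate without locking.
import threading

import numpy as np

n, d = 1000, 20
rng = np.random.default_rng(0)
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)

x = np.zeros(d)   # shared iterate, read and written without locks
v = np.zeros(d)   # shared momentum buffer
lr, beta = 1e-3, 0.9

def worker(seed: int, num_steps: int) -> None:
    local_rng = np.random.default_rng(seed)
    for _ in range(num_steps):
        i = local_rng.integers(n)               # sample one data point
        x_stale = x.copy()                      # possibly stale snapshot of x
        grad = (A[i] @ x_stale - b[i]) * A[i]   # stochastic gradient at the stale point
        v[:] = beta * v - lr * grad             # heavy-ball momentum update, in place
        x[:] = x + v                            # lock-free write to the shared iterate

threads = [threading.Thread(target=worker, args=(s, 2000)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("final mean-squared residual:", np.mean((A @ x - b) ** 2))
```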

Footnotes
1. This assumption is used only for simplifying the proof.
 
Metadata
Title
Accelerated Parallel Algorithms
Authors
Zhouchen Lin
Huan Li
Cong Fang
Copyright Year
2020
Publisher
Springer Singapore
DOI
https://doi.org/10.1007/978-981-15-2910-8_6
