
2018 | OriginalPaper | Chapter

Fast Communication Structure for Asynchronous Distributed ADMM Under Unbalance Process Arrival Pattern


Abstract

The alternating direction method of multipliers (ADMM) is an algorithm for solving large-scale optimization problems in machine learning. To reduce communication delay in a distributed environment, asynchronous distributed ADMM (AD-ADMM) was proposed. However, because of the unbalanced process arrival pattern in multiprocessor clusters, the star communication structure used in AD-ADMM is inefficient. Moreover, the load across the cluster is unbalanced, which reduces data processing capacity. This paper proposes a hierarchical parameter server communication structure (HPS) and a corresponding asynchronous distributed ADMM algorithm (HAD-ADMM). The algorithm mitigates the unbalanced-arrival problem through process grouping and scattered updates of the global variable, essentially achieving load balancing. Experiments show that HAD-ADMM is highly efficient in large-scale distributed environments and has no significant impact on convergence.
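The chapter's algorithm details are not reproduced on this page, but the consensus-ADMM iteration that AD-ADMM and HAD-ADMM parallelize follows the standard form popularized by Boyd et al. As a rough, synchronous illustration (not the authors' method): each worker performs a local x-update on its data shard, a central z-update aggregates the global variable (the step the hierarchical parameter server distributes across groups), and dual variables are updated locally. The function name `consensus_admm` and the least-squares objective are illustrative assumptions:

```python
import numpy as np

def consensus_admm(A_blocks, b_blocks, rho=1.0, iters=300):
    """Synchronous consensus ADMM for min_x sum_i 0.5*||A_i x - b_i||^2.

    Each worker i holds one shard (A_i, b_i); z is the shared global
    variable that all local copies x_i are driven to agree on.
    Illustrative sketch only -- the chapter's HAD-ADMM performs these
    updates asynchronously over a hierarchical parameter server.
    """
    n = A_blocks[0].shape[1]
    N = len(A_blocks)
    x = [np.zeros(n) for _ in range(N)]
    u = [np.zeros(n) for _ in range(N)]   # scaled dual variables
    z = np.zeros(n)
    # Each worker pre-factors its local system (A_i^T A_i + rho*I).
    chol = [np.linalg.cholesky(A.T @ A + rho * np.eye(n)) for A in A_blocks]
    rhs0 = [A.T @ b for A, b in zip(A_blocks, b_blocks)]
    for _ in range(iters):
        for i in range(N):  # local x-updates (run in parallel on workers)
            r = rhs0[i] + rho * (z - u[i])
            y = np.linalg.solve(chol[i], r)       # forward solve L y = r
            x[i] = np.linalg.solve(chol[i].T, y)  # back solve L^T x = y
        # Global z-update: the aggregation step the parameter server owns.
        z = np.mean([x[i] + u[i] for i in range(N)], axis=0)
        for i in range(N):  # local dual updates
            u[i] += x[i] - z
    return z
```

Because the local objectives sum to the global least-squares objective, z converges to the same solution a centralized solver would find; the star-structure bottleneck the paper targets is exactly the z-update, where every worker must reach the single server.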


Metadata
Title
Fast Communication Structure for Asynchronous Distributed ADMM Under Unbalance Process Arrival Pattern
Authors
Shuqing Wang
Yongmei Lei
Copyright Year
2018
DOI
https://doi.org/10.1007/978-3-030-01418-6_36
