
2018 | OriginalPaper | Chapter

8. Parallel Computing Considerations

Authors: Jun Zhao, Wei Wang, Chunyang Sheng

Published in: Data-Driven Prediction for Industrial Processes and Their Applications

Publisher: Springer International Publishing

Abstract

This chapter discusses the computational cost of machine learning models. Reducing their training time is a prerequisite for industrial application, since a production process usually requires real-time responses. The most common way to accelerate training is to develop a parallel computing framework. In the literature, two popular approaches to speeding up training are the use of a computer equipped with a graphics processing unit (GPU) and the use of a computer cluster consisting of a number of machines. This chapter first introduces the basic ideas of GPU acceleration (e.g., the compute unified device architecture (CUDA) created by NVIDIA™) and of the computer cluster framework (e.g., the MapReduce framework), and then gives specific examples of each. When training an EKF-based Elman network, the inversion of a Jacobian matrix is the most time-consuming step; a parallel computing strategy for this operation is therefore proposed based on CUDA-enabled GPU acceleration. In addition, for LSSVM modeling, a CUDA-based parallel PSO is introduced for hyper-parameter optimization. As for the computer cluster version, a parallelized EKF-based ESN is designed using the MapReduce framework for acceleration. Finally, a series of experimental analyses using practical energy data from the steel industry is presented to validate the performance of the accelerating approaches.
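To make the GPU side of the discussion concrete, the following is a minimal CUDA C sketch, not the authors' implementation, of how a dense matrix product (the kind of operation that dominates the EKF update before the Jacobian-related inversion) is parallelized on a GPU: one thread computes one output element. The matrix size, the 16×16 block size, and the kernel and variable names (matMulKernel, dA, dB, dC) are illustrative assumptions.

```cuda
// Minimal sketch (not the chapter's code): a naive CUDA kernel that
// parallelizes the dense matrix product C = A * B, an operation of the
// kind that dominates the EKF update. Names and sizes are assumptions.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void matMulKernel(const float* A, const float* B, float* C, int n)
{
    // One thread computes one element C[row][col].
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n && col < n) {
        float acc = 0.0f;
        for (int k = 0; k < n; ++k)
            acc += A[row * n + k] * B[k * n + col];
        C[row * n + col] = acc;
    }
}

int main()
{
    const int n = 512;                      // illustrative matrix size
    size_t bytes = (size_t)n * n * sizeof(float);

    // Host buffers filled with a simple test pattern.
    float *hA = (float*)malloc(bytes), *hB = (float*)malloc(bytes), *hC = (float*)malloc(bytes);
    for (int i = 0; i < n * n; ++i) { hA[i] = 1.0f; hB[i] = 2.0f; }

    // Device buffers and host-to-device copies.
    float *dA, *dB, *dC;
    cudaMalloc(&dA, bytes); cudaMalloc(&dB, bytes); cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    // Launch a 2-D grid so that every output element gets its own thread.
    dim3 block(16, 16);
    dim3 grid((n + block.x - 1) / block.x, (n + block.y - 1) / block.y);
    matMulKernel<<<grid, block>>>(dA, dB, dC, n);
    cudaDeviceSynchronize();

    cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);
    printf("C[0][0] = %f (expected %f)\n", hC[0], 2.0f * n);

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    free(hA); free(hB); free(hC);
    return 0;
}
```

The sketch can be compiled with nvcc (e.g., `nvcc matmul.cu -o matmul`). The strategy described in the chapter targets the Jacobian matrix inversion itself, for which vendor linear-algebra libraries (e.g., cuBLAS/cuSOLVER) or a parallel elimination scheme would take the place of this naive kernel.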

Metadata
Title
Parallel Computing Considerations
Authors
Jun Zhao
Wei Wang
Chunyang Sheng
Copyright Year
2018
DOI
https://doi.org/10.1007/978-3-319-94051-9_8
