
Parallel training models of deep belief network using MapReduce for the classifications of emotions

  • Original article
  • Published:
International Journal of System Assurance Engineering and Management

Abstract

In this paper, we present two parallel models for training deep belief networks (DBNs) based on the MapReduce framework. In both models, more than one computer is used to train the DBNs layer by layer, following the positive and negative phases. It is well known that training DBNs requires a large amount of time; with the help of the proposed models, the computation can be performed in less time than with a standalone DBN. Experiments on the Ryerson Audio-Visual Database of Emotional Speech and Song and the Toronto Emotional Speech Set show that the first proposed model, the first parallel MapReduce-based deep belief network (FParMRBDBN), significantly improves the computation time, while the second proposed model, the second parallel MapReduce-based deep belief network (SParMRBDBN), significantly accelerates the training speed of DBNs. Moreover, both proposed models also give significant results in the classification of emotions.
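To make the training scheme concrete, the following is a minimal, illustrative sketch in Python (not the authors' FParMRBDBN or SParMRBDBN implementation) of a data-parallel, MapReduce-style contrastive-divergence step for one RBM layer of a DBN: each mapper computes the positive-phase and negative-phase statistics on its own data shard, and the reducer averages the per-shard gradients into a single weight update. All function and variable names (map_cd1, reduce_updates, the toy data, the learning rate) are assumptions introduced here purely for illustration.

    # Illustrative sketch only: data-parallel CD-1 for a single RBM layer,
    # organised as a "map" over data shards and a "reduce" that averages updates.
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def map_cd1(shard, W, b_vis, b_hid, rng):
        """Mapper: positive- and negative-phase statistics for one data shard."""
        v0 = shard
        # Positive phase: hidden activations driven by the data.
        h0_prob = sigmoid(v0 @ W + b_hid)
        h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
        # Negative phase: one Gibbs step down to the visible layer and back up.
        v1_prob = sigmoid(h0 @ W.T + b_vis)
        h1_prob = sigmoid(v1_prob @ W + b_hid)
        grad_W = (v0.T @ h0_prob - v1_prob.T @ h1_prob) / len(shard)
        grad_b_vis = (v0 - v1_prob).mean(axis=0)
        grad_b_hid = (h0_prob - h1_prob).mean(axis=0)
        return grad_W, grad_b_vis, grad_b_hid

    def reduce_updates(partials, W, b_vis, b_hid, lr=0.05):
        """Reducer: average per-shard gradients and apply one parameter update."""
        gW = np.mean([p[0] for p in partials], axis=0)
        gv = np.mean([p[1] for p in partials], axis=0)
        gh = np.mean([p[2] for p in partials], axis=0)
        return W + lr * gW, b_vis + lr * gv, b_hid + lr * gh

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        data = (rng.random((400, 64)) < 0.3).astype(float)  # toy binary features
        W = 0.01 * rng.standard_normal((64, 32))
        b_vis, b_hid = np.zeros(64), np.zeros(32)
        shards = np.array_split(data, 4)                     # 4 simulated worker nodes
        for epoch in range(5):
            partials = [map_cd1(s, W, b_vis, b_hid, rng) for s in shards]  # "map"
            W, b_vis, b_hid = reduce_updates(partials, W, b_vis, b_hid)    # "reduce"

In an actual MapReduce or Hadoop deployment the per-shard computation would run on separate nodes and the averaging in the reducer, rather than a local loop as simulated above; the sketch only conveys how the positive and negative phases of one layer's training can be split across data shards and then combined.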




Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.


Funding

Not Applicable.

Author information


Corresponding author

Correspondence to Gaurav Agarwal.

Ethics declarations

Conflicts of interest

Gaurav Agarwal and Hari Om declare that they have no conflict of interest.

Consent to participate

Due acknowledgement/citation has been given to all.

Code availability

The code that supports the findings of this study is available from the corresponding author upon reasonable request.

Ethics approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Agarwal, G., Om, H. Parallel training models of deep belief network using MapReduce for the classifications of emotions. Int J Syst Assur Eng Manag 13 (Suppl 2), 925–940 (2022). https://doi.org/10.1007/s13198-021-01394-3


  • Received:

  • Revised:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s13198-021-01394-3

Keywords
