2023 | OriginalPaper | Chapter

Dense Video Captioning Using Video-Audio Features and Topic Modeling Based on Caption

Authors : Lakshmi Harika Palivela, S. Swetha, M. Nithish Guhan, M. Prasanna Venkatesh

Published in: Proceedings of Fourth International Conference on Communication, Computing and Electronics Systems

Publisher: Springer Nature Singapore


Abstract

Videos contain multiple events. Dense video captioning entails captioning each of these events separately. The proposed model generates a textual description from the dynamic and static visual features, speech, and audio cues of a video, and topic modeling is then performed on the generated caption. An uncertainty modeling technique is applied to find temporal event proposals, producing timestamps for each event in the video, and a Transformer that takes multi-modal features as input generates captions more effectively and precisely. The topic modeling tasks include highlighting keywords in the generated captions and topic generation, i.e., identifying the category to which the whole caption belongs.
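As a rough illustration of the keyword-highlighting step described above, the sketch below ranks the words of a generated caption by frequency after stopword removal. This is only a minimal frequency-based stand-in, not the chapter's actual method, which relies on a BERT-based extractor (KeyBERT); the caption text, stopword list, and `extract_keywords` helper are illustrative assumptions.

```python
import re
from collections import Counter

# Small illustrative stopword list; a real pipeline would use a fuller one.
STOPWORDS = {"a", "an", "the", "is", "are", "and", "on", "in",
             "of", "to", "then", "while"}

def extract_keywords(caption: str, top_k: int = 3) -> list[str]:
    """Return the top_k most frequent non-stopword tokens in a caption."""
    tokens = re.findall(r"[a-z]+", caption.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_k)]

# Hypothetical caption such as a dense video captioning model might emit.
caption = ("A man is playing a guitar on stage. The man sings while "
           "playing the guitar, and the crowd claps.")
print(extract_keywords(caption))  # → ['man', 'playing', 'guitar']
```

In the chapter's pipeline, the extracted keywords would then feed the topic-generation step, which assigns the caption to a broader category.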


Metadata
Title
Dense Video Captioning Using Video-Audio Features and Topic Modeling Based on Caption
Authors
Lakshmi Harika Palivela
S. Swetha
M. Nithish Guhan
M. Prasanna Venkatesh
Copyright Year
2023
Publisher
Springer Nature Singapore
DOI
https://doi.org/10.1007/978-981-19-7753-4_40