2020 | OriginalPaper | Chapter

Unsupervised Feature Propagation for Fast Video Object Detection Using Generative Adversarial Networks

Authors: Xuan Zhang, Guangxing Han, Wenduo He

Published in: MultiMedia Modeling

Publisher: Springer International Publishing

Abstract

We propose an unsupervised Feature Propagation Generative Adversarial Network (denoted FPGAN) for fast video object detection. Our video object detector applies a pre-trained, state-of-the-art image object detector (R-FCN) to sparse key frames and propagates their CNN features to adjacent frames through a lightweight transformation network for fast detection. To learn the feature propagation network, we make full use of unlabeled video data and employ generative adversarial networks in model training. Specifically, in FPGAN the generator is the feature propagation network, while the discriminator exploits second-order temporal coherence and 3D ConvNets to distinguish predicted CNN features from “ground truth” CNN features. In addition, a Euclidean distance loss provided by the pre-trained image object detector jointly supervises the learning. Our method requires no human labeling of videos. Experiments on the large-scale ImageNet VID dataset demonstrate the effectiveness of our method.
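To make the sparse key-frame scheme concrete, below is a minimal PyTorch-style sketch of the inference path the abstract describes. The class name, the propagator interface (key-frame features, key frame, current frame), and the key-frame interval are hypothetical placeholders for illustration, not the authors' implementation.

    import torch
    import torch.nn as nn

    class FeaturePropagationDetector(nn.Module):
        """Run a heavy backbone on sparse key frames and a cheap
        propagation network on the frames in between; one detection
        head is shared by both paths."""

        def __init__(self, backbone, propagator, head, key_interval=10):
            super().__init__()
            self.backbone = backbone        # heavy R-FCN-style feature extractor
            self.propagator = propagator    # lightweight feature-propagation network
            self.head = head                # detection head applied to every feature map
            self.key_interval = key_interval

        def forward(self, frames):
            # frames: (T, C, H, W) video clip
            outputs, key_feat, key_frame = [], None, None
            for t in range(frames.shape[0]):
                frame = frames[t:t + 1]
                if t % self.key_interval == 0:
                    # expensive path: full backbone on a sparse key frame
                    key_feat, key_frame = self.backbone(frame), frame
                    feat = key_feat
                else:
                    # cheap path: transform key-frame features to the current frame
                    feat = self.propagator(key_feat, key_frame, frame)
                outputs.append(self.head(feat))
            return outputs

    # Toy stand-ins just to show the data flow; real modules would replace these.
    backbone = nn.Conv2d(3, 8, 3, padding=1)
    propagator = lambda key_feat, key_frame, frame: key_feat  # identity placeholder
    head = nn.Conv2d(8, 5, 1)
    detector = FeaturePropagationDetector(backbone, propagator, head, key_interval=5)
    outputs = detector(torch.randn(12, 3, 64, 64))

At training time the propagation network would play the generator role described above: it is fit on unlabeled video with an adversarial loss from a 3D-ConvNet discriminator enforcing second-order temporal coherence, plus a Euclidean distance loss against the backbone's own features on the target frames.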


Literature
1. Girshick, R., Donahue, J., Darrell, T., Malik, J.: Region-based convolutional networks for accurate object detection and segmentation. In: CVPR, pp. 580–587 (2014)
2. Girshick, R.: Fast R-CNN. In: ICCV, pp. 1440–1448 (2015)
3. Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: towards real-time object detection with region proposal networks. In: NIPS, pp. 91–99 (2015)
4. Dai, J., Li, Y., He, K., Sun, J.: R-FCN: object detection via region-based fully convolutional networks. In: NIPS, pp. 379–387 (2016)
5. Han, G., Zhang, X., Li, C.: Revisiting faster R-CNN: a deeper look at region proposal network. In: ICONIP, pp. 14–24 (2017)
6. Redmon, J., Farhadi, A.: YOLO9000: better, faster, stronger. In: CVPR, pp. 7263–7271 (2017)
8. Han, G., Zhang, X., Li, C.: Single shot object detection with top-down refinement. In: ICIP, pp. 3360–3364 (2017)
9. Kang, K., et al.: Object detection in videos with tubelet proposal networks. In: CVPR, pp. 727–735 (2017)
11. Zhu, X., Xiong, Y., Dai, J., Yuan, L., Wei, Y.: Deep feature flow for video recognition. In: CVPR, pp. 2349–2358 (2017)
13. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: NIPS, pp. 1097–1105 (2012)
14. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR, pp. 770–778 (2016)
15. Isola, P., Zhu, J.-Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. In: CVPR (2017)
17. Mathieu, M., Couprie, C., LeCun, Y.: Deep multi-scale video prediction beyond mean square error. In: ICLR (2016)
18. Goodfellow, I., et al.: Generative adversarial nets. In: NIPS (2014)
19. Shrivastava, A., Pfister, T., Tuzel, O., Susskind, J., Wang, W., Webb, R.: Learning from simulated and unsupervised images through adversarial training. In: CVPR (2017)
20. Zhu, J.-Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: ICCV (2017)
21. Wiskott, L., Sejnowski, T.J.: Slow feature analysis: unsupervised learning of invariances. Neural Comput. 14(4), 715–770 (2002)
22. Jayaraman, D., Grauman, K.: Slow and steady feature analysis: higher order temporal coherence in video. In: CVPR (2016)
23. Wang, X., Gupta, A.: Unsupervised learning of visual representations using videos. In: ICCV (2015)
24. Lee, H.-Y., Huang, J.-B., Singh, M., Yang, M.-H.: Unsupervised representation learning by sorting sequences. In: ICCV (2017)
25. Dosovitskiy, A., et al.: FlowNet: learning optical flow with convolutional networks. In: ICCV (2015)
26. Tran, D., Bourdev, L., Fergus, R., Torresani, L., Paluri, M.: Learning spatiotemporal features with 3D convolutional networks. In: ICCV (2015)
27. Arjovsky, M., Chintala, S., Bottou, L.: Wasserstein generative adversarial networks. In: ICML (2017)
Metadata
Title
Unsupervised Feature Propagation for Fast Video Object Detection Using Generative Adversarial Networks
Authors
Xuan Zhang
Guangxing Han
Wenduo He
Copyright Year
2020
DOI
https://doi.org/10.1007/978-3-030-37731-1_50