
Video Enhancement with Task-Oriented Flow

Authors: Tianfan Xue, Baian Chen, Jiajun Wu, Donglai Wei, William T. Freeman

Published in: International Journal of Computer Vision | Issue 8/2019

Abstract

Many video enhancement algorithms rely on optical flow to register frames in a video sequence. Precise flow estimation is, however, intractable, and optical flow itself is often a sub-optimal representation for particular video processing tasks. In this paper, we propose task-oriented flow (TOFlow), a motion representation learned in a self-supervised, task-specific manner. We design a neural network with a trainable motion estimation component and a video processing component, and train them jointly to learn the task-oriented flow. For evaluation, we build Vimeo-90K, a large-scale, high-quality video dataset for low-level video processing. TOFlow outperforms traditional optical flow on standard benchmarks as well as our Vimeo-90K dataset in three video processing tasks: frame interpolation, video denoising/deblocking, and video super-resolution.
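
To make the joint design described in the abstract concrete, below is a minimal PyTorch-style sketch; it is an illustration of the idea, not the authors' released implementation. A trainable flow module estimates motion from each neighboring frame to the reference frame, a differentiable bilinear warp registers the neighbors, and a task-specific processing module fuses them; the task loss backpropagates through both modules, which is what makes the learned flow task-oriented. Here `flow_net` and `process_net` are hypothetical stand-ins for the paper's SpyNet-style motion estimator and its task-specific image-processing network.

```python
# Minimal sketch (not the authors' released code) of a task-oriented flow
# pipeline: flow estimation + differentiable warping + task network,
# trained end-to-end so the flow adapts to the enhancement task.
import torch
import torch.nn as nn
import torch.nn.functional as F

def backward_warp(frame, flow):
    """Warp `frame` toward the reference view with a dense flow field.
    frame: (N, C, H, W); flow: (N, 2, H, W) in pixels (x, y)."""
    n, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(frame.device)   # (2, H, W)
    coords = base.unsqueeze(0) + flow                              # (N, 2, H, W)
    # Normalize sampling coordinates to [-1, 1] as required by grid_sample.
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                           # (N, H, W, 2)
    return F.grid_sample(frame, grid, align_corners=True)

class TaskOrientedPipeline(nn.Module):
    """`flow_net` and `process_net` are hypothetical stand-ins for the
    SpyNet-style motion module and the task-specific processing CNN."""
    def __init__(self, flow_net: nn.Module, process_net: nn.Module):
        super().__init__()
        self.flow_net = flow_net        # predicts flow from a neighbor to the reference
        self.process_net = process_net  # fuses the warped neighbors into the output

    def forward(self, reference, neighbors):
        warped = []
        for nb in neighbors:
            flow = self.flow_net(torch.cat([reference, nb], dim=1))
            warped.append(backward_warp(nb, flow))
        # A task loss (e.g., L1 against the ground-truth frame) backpropagates
        # through both modules, so the learned flow becomes task-oriented.
        return self.process_net(torch.cat([reference, *warped], dim=1))
```

Because the warp is a differentiable bilinear sampler (in the spirit of spatial transformer networks, Jaderberg et al. 2015), gradients from the enhancement loss reach the motion module directly, so no ground-truth flow is required.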

Footnotes
1
Note that Fixed Flow and TOFlow use only a 4-level SpyNet structure for memory efficiency, whereas the original SpyNet network has 5 levels (a rough sketch of such a coarse-to-fine pyramid appears below).
 
2
We did not evaluate AdaConv on the DVF dataset, as neither the implementation of AdaConv nor the DVF dataset is publicly available.
 
3
The EPE of Fixed Flow on the Sintel dataset differs from the EPE of SpyNet (Ranjan and Black 2017) reported on the Sintel website because, as mentioned above, Fixed Flow is trained differently from SpyNet.
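
To make the pyramid structure mentioned in footnote 1 concrete, the following is a rough coarse-to-fine sketch; it is an illustration only, not the SpyNet or TOFlow code. Flow is predicted at the coarsest scale and refined residually at each finer level, so a 4-level variant simply uses one fewer refinement stage than the original 5-level SpyNet. In the actual SpyNet design the second frame is warped by the upsampled flow before each refinement; that step is noted in a comment and omitted here for brevity. `level_net_factory` is a hypothetical constructor for the per-level CNNs.

```python
# A rough, self-contained sketch of a 4-level coarse-to-fine flow pyramid
# (an illustration of the idea, not the released SpyNet/TOFlow code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidFlow(nn.Module):
    def __init__(self, level_net_factory, num_levels: int = 4):
        super().__init__()
        # One small CNN per pyramid level; `level_net_factory` is hypothetical.
        self.levels = nn.ModuleList(level_net_factory() for _ in range(num_levels))

    def forward(self, frame1, frame2):
        # Build image pyramids; index 0 is full resolution, the last is coarsest.
        pyr1, pyr2 = [frame1], [frame2]
        for _ in range(len(self.levels) - 1):
            pyr1.append(F.avg_pool2d(pyr1[-1], 2))
            pyr2.append(F.avg_pool2d(pyr2[-1], 2))

        # Start from zero flow at the coarsest level.
        flow = torch.zeros_like(pyr1[-1][:, :2])
        for net, f1, f2 in zip(self.levels, reversed(pyr1), reversed(pyr2)):
            if flow.shape[-2:] != f1.shape[-2:]:
                # Upsample the coarse flow and rescale its magnitude in pixels.
                flow = 2.0 * F.interpolate(flow, size=f1.shape[-2:],
                                           mode="bilinear", align_corners=True)
            # SpyNet warps f2 by the upsampled flow before this step (omitted
            # for brevity); each level predicts a residual correction.
            flow = flow + net(torch.cat([f1, f2, flow], dim=1))
        return flow
```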
 
Literature
Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., & Vijayanarasimhan, S. (2016). YouTube-8M: A large-scale video classification benchmark. arXiv:1609.08675.
Ahn, N., Kang, B., & Sohn, K. A. (2018). Fast, accurate, and lightweight super-resolution with cascading residual network. In European conference on computer vision.
Aittala, M., & Durand, F. (2018). Burst image deblurring using permutation invariant convolutional neural networks. In European conference on computer vision (pp. 731–747).
Baker, S., Scharstein, D., Lewis, J., Roth, S., Black, M. J., & Szeliski, R. (2011). A database and evaluation methodology for optical flow. International Journal of Computer Vision, 92(1), 1–31.
Brox, T., Bruhn, A., Papenberg, N., & Weickert, J. (2004). High accuracy optical flow estimation based on a theory for warping. In European conference on computer vision.
Brox, T., Bregler, C., & Malik, J. (2009). Large displacement optical flow. In IEEE conference on computer vision and pattern recognition.
Bulat, A., Yang, J., & Tzimiropoulos, G. (2018). To learn image super-resolution, use a GAN to learn how to do image degradation first. In European conference on computer vision.
Butler, D. J., Wulff, J., Stanley, G. B., & Black, M. J. (2012). A naturalistic open source movie for optical flow evaluation. In European conference on computer vision (pp. 611–625).
Caballero, J., Ledig, C., Aitken, A., Acosta, A., Totz, J., Wang, Z., & Shi, W. (2017). Real-time video super-resolution with spatio-temporal networks and motion compensation. In IEEE conference on computer vision and pattern recognition.
Fischer, P., Dosovitskiy, A., Ilg, E., Häusser, P., Hazırbaş, C., Golkov, V., van der Smagt, P., Cremers, D., & Brox, T. (2015). FlowNet: Learning optical flow with convolutional networks. In IEEE international conference on computer vision.
Ganin, Y., Kononenko, D., Sungatullina, D., & Lempitsky, V. (2016). DeepWarp: Photorealistic image resynthesis for gaze manipulation. In European conference on computer vision.
Ghoniem, M., Chahir, Y., & Elmoataz, A. (2010). Nonlocal video denoising, simplification and inpainting using discrete regularization on graphs. Signal Processing, 90(8), 2445–2455.
Godard, C., Matzen, K., & Uyttendaele, M. (2017). Deep burst denoising. In European conference on computer vision.
Horn, B. K., & Schunck, B. G. (1981). Determining optical flow. Artificial Intelligence, 17(1–3), 185–203.
Huang, Y., Wang, W., & Wang, L. (2015). Bidirectional recurrent convolutional networks for multi-frame super-resolution. In Advances in neural information processing systems.
Jaderberg, M., Simonyan, K., Zisserman, A., et al. (2015). Spatial transformer networks. In Advances in neural information processing systems.
Jiang, H., Sun, D., Jampani, V., Yang, M. H., Learned-Miller, E., & Kautz, J. (2017). Super SloMo: High quality estimation of multiple intermediate frames for video interpolation. In IEEE conference on computer vision and pattern recognition.
Jiang, X., Le Pendu, M., & Guillemot, C. (2018). Depth estimation with occlusion handling from a sparse set of light field views. In IEEE international conference on image processing.
Jo, Y., Oh, S. W., Kang, J., & Kim, S. J. (2018). Deep video super-resolution network using dynamic upsampling filters without explicit motion compensation. In IEEE conference on computer vision and pattern recognition (pp. 3224–3232).
Kappeler, A., Yoo, S., Dai, Q., & Katsaggelos, A. K. (2016). Video super-resolution with convolutional neural networks. IEEE Transactions on Computational Imaging, 2(2), 109–122.
Kingma, D. P., & Ba, J. (2015). Adam: A method for stochastic optimization. In International conference on learning representations.
Li, M., Xie, Q., Zhao, Q., Wei, W., Gu, S., Tao, J., & Meng, D. (2018). Video rain streak removal by multiscale convolutional sparse coding. In IEEE conference on computer vision and pattern recognition (pp. 6644–6653).
Liao, R., Tao, X., Li, R., Ma, Z., & Jia, J. (2015). Video super-resolution via deep draft-ensemble learning. In IEEE conference on computer vision and pattern recognition.
Liu, C., & Freeman, W. (2010). A high-quality video denoising algorithm based on reliable motion estimation. In European conference on computer vision.
Liu, C., & Sun, D. (2011). A Bayesian approach to adaptive video super resolution. In IEEE conference on computer vision and pattern recognition.
Liu, C., & Sun, D. (2014). On Bayesian adaptive video super resolution. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(2), 346–360.
Liu, Z., Yeh, R., Tang, X., Liu, Y., & Agarwala, A. (2017). Video frame synthesis using deep voxel flow. In IEEE international conference on computer vision.
Lu, G., Ouyang, W., Xu, D., Zhang, X., Gao, Z., & Sun, M. T. (2018). Deep Kalman filtering network for video compression artifact reduction. In European conference on computer vision (pp. 568–584).
Ma, Z., Liao, R., Tao, X., Xu, L., Jia, J., & Wu, E. (2015). Handling motion blur in multi-frame super-resolution. In IEEE conference on computer vision and pattern recognition.
Maggioni, M., Boracchi, G., Foi, A., & Egiazarian, K. (2012). Video denoising, deblocking, and enhancement through separable 4-D nonlocal spatiotemporal transforms. IEEE Transactions on Image Processing, 21(9), 3952–3966.
Makansi, O., Ilg, E., & Brox, T. (2017). End-to-end learning of video super-resolution with motion compensation. In German conference on pattern recognition.
Mathieu, M., Couprie, C., & LeCun, Y. (2016). Deep multi-scale video prediction beyond mean square error. In International conference on learning representations.
Mémin, E., & Pérez, P. (1998). Dense estimation and object-based segmentation of the optical flow with robust techniques. IEEE Transactions on Image Processing, 7(5), 703–719.
Mildenhall, B., Barron, J. T., Chen, J., Sharlet, D., Ng, R., & Carroll, R. (2018). Burst denoising with kernel prediction networks. In IEEE conference on computer vision and pattern recognition.
Nasrollahi, K., & Moeslund, T. B. (2014). Super-resolution: A comprehensive survey. Machine Vision and Applications, 25(6), 1423–1468.
Niklaus, S., & Liu, F. (2018). Context-aware synthesis for video frame interpolation. In IEEE conference on computer vision and pattern recognition.
Niklaus, S., Mai, L., & Liu, F. (2017a). Video frame interpolation via adaptive convolution. In IEEE conference on computer vision and pattern recognition.
Niklaus, S., Mai, L., & Liu, F. (2017b). Video frame interpolation via adaptive separable convolution. In IEEE international conference on computer vision.
Oliva, A., & Torralba, A. (2001). Modeling the shape of the scene: A holistic representation of the spatial envelope. International Journal of Computer Vision, 42(3), 145–175.
Ranjan, A., & Black, M. J. (2017). Optical flow estimation using a spatial pyramid network. In IEEE conference on computer vision and pattern recognition.
Revaud, J., Weinzaepfel, P., Harchaoui, Z., & Schmid, C. (2015). EpicFlow: Edge-preserving interpolation of correspondences for optical flow. In IEEE conference on computer vision and pattern recognition.
Sajjadi, M. S., Vemulapalli, R., & Brown, M. (2018). Frame-recurrent video super-resolution. In IEEE conference on computer vision and pattern recognition (pp. 6626–6634).
Tao, X., Gao, H., Liao, R., Wang, J., & Jia, J. (2017). Detail-revealing deep video super-resolution. In IEEE international conference on computer vision.
Varghese, G., & Wang, Z. (2010). Video denoising based on a spatiotemporal Gaussian scale mixture model. IEEE Transactions on Circuits and Systems for Video Technology, 20(7), 1032–1040.
Wang, T. C., Zhu, J. Y., Kalantari, N. K., Efros, A. A., & Ramamoorthi, R. (2017). Light field video capture using a learning-based hybrid imaging system. In SIGGRAPH.
Wedel, A., Cremers, D., Pock, T., & Bischof, H. (2009). Structure- and motion-adaptive regularization for high accuracy optic flow. In IEEE conference on computer vision and pattern recognition.
Wen, B., Li, Y., Pfister, L., & Bresler, Y. (2017). Joint adaptive sparsity and low-rankness on the fly: An online tensor reconstruction scheme for video denoising. In IEEE international conference on computer vision.
Werlberger, M., Pock, T., Unger, M., & Bischof, H. (2011). Optical flow guided TV-L1 video interpolation and restoration. In International conference on energy minimization methods in computer vision and pattern recognition.
Xu, J., Ranftl, R., & Koltun, V. (2017). Accurate optical flow via direct cost volume processing. In IEEE conference on computer vision and pattern recognition.
Xu, S., Zhang, F., He, X., Shen, X., & Zhang, X. (2015). PM-PM: PatchMatch with Potts model for object segmentation and stereo matching. IEEE Transactions on Image Processing, 24(7), 2182–2196.
Yang, R., Xu, M., Wang, Z., & Li, T. (2018). Multi-frame quality enhancement for compressed video. In IEEE conference on computer vision and pattern recognition (pp. 6664–6673).
Yu, J. J., Harley, A. W., & Derpanis, K. G. (2016). Back to basics: Unsupervised learning of optical flow via brightness constancy and motion smoothness. In European conference on computer vision workshops.
Yu, Z., Li, H., Wang, Z., Hu, Z., & Chen, C. W. (2013). Multi-level video frame interpolation: Exploiting the interaction among different levels. IEEE Transactions on Circuits and Systems for Video Technology, 23(7), 1235–1248.
Zhou, T., Tulsiani, S., Sun, W., Malik, J., & Efros, A. A. (2016). View synthesis by appearance flow. In European conference on computer vision.
Zhu, X., Wang, Y., Dai, J., Yuan, L., & Wei, Y. (2017). Flow-guided feature aggregation for video object detection. In IEEE international conference on computer vision.
Zitnick, C. L., Kang, S. B., Uyttendaele, M., Winder, S., & Szeliski, R. (2004). High-quality video view interpolation using a layered representation. ACM Transactions on Graphics, 23(3), 600–608.
Metadata
Title
Video Enhancement with Task-Oriented Flow
Authors
Tianfan Xue
Baian Chen
Jiajun Wu
Donglai Wei
William T. Freeman
Publication date
12-02-2019
Publisher
Springer US
Published in
International Journal of Computer Vision / Issue 8/2019
Print ISSN: 0920-5691
Electronic ISSN: 1573-1405
DOI
https://doi.org/10.1007/s11263-018-01144-2
