Published in: International Journal of Computer Vision 1/2021

01-08-2020

View Transfer on Human Skeleton Pose: Automatically Disentangle the View-Variant and View-Invariant Information for Pose Representation Learning

Authors: Qiang Nie, Yunhui Liu

Abstract

Learning a good pose representation is important for many applications, such as human pose estimation and action recognition. However, the representations learned by most approaches are not intrinsic, and their transferability across datasets and tasks is limited. In this paper, we introduce a method to learn a versatile representation that can recover unseen corrupted skeletons, support human action recognition, and transfer a pose from one view to another without knowing the relationships between cameras. To this end, a sequential bidirectional recursive network (SeBiReNet) is proposed for modeling the kinematic dependency between skeleton joints. Using the SeBiReNet as the core module, a denoising autoencoder is designed to learn intrinsic pose features through the task of recovering corrupted skeletons. Instead of extracting only a view-invariant feature, as many other methods do, we disentangle the view-invariant feature from the view-variant feature in the latent space and use the two together as the representation of a human pose. For better feature disentanglement, an adversarial augmentation strategy is proposed and applied to the denoising autoencoder. Disentangling the view-variant and view-invariant features enables view transfer on 3D poses. Extensive experiments on different datasets and tasks verify the effectiveness and versatility of the learned representation.
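The core idea of the abstract — splitting a pose into a view-invariant part and a view-variant part, then recombining them to transfer a view — can be illustrated with simple geometry. The following is a minimal toy sketch, not the paper's SeBiReNet model: it treats bone lengths of a 2D skeleton as the view-invariant descriptor and a global rotation as the view-variant factor, and shows that changing the view leaves the invariant part untouched.

```python
# Toy illustration (not the authors' implementation): decompose a 2D
# skeleton into a view-invariant part (bone lengths) and a view-variant
# part (a global rotation), then recombine them for a "view transfer".
import math

def rotate(points, theta):
    """Rotate 2D joint coordinates about the origin by angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

def bone_lengths(points, bones):
    """View-invariant descriptor: Euclidean length of each bone."""
    return [math.dist(points[i], points[j]) for i, j in bones]

# A 3-joint toy skeleton (e.g. hip -> knee -> ankle) and its bone list.
pose = [(0.0, 0.0), (0.0, -1.0), (0.5, -1.8)]
bones = [(0, 1), (1, 2)]

# Observing the same pose from another camera amounts to a rotation.
view_theta = math.pi / 3
pose_other_view = rotate(pose, view_theta)

# Bone lengths are unchanged by the view change (view-invariant) ...
orig = bone_lengths(pose, bones)
rotated = bone_lengths(pose_other_view, bones)
assert all(abs(a - b) < 1e-9 for a, b in zip(orig, rotated))

# ... while the raw joint coordinates are not (view-variant).
assert pose != pose_other_view

# "View transfer" in this toy setting: re-express pose A under the
# rotation (view) taken from another observation.
pose_transferred = rotate(pose, view_theta)
print(bone_lengths(pose_transferred, bones))
```

In the paper this separation is learned in the latent space of a denoising autoencoder rather than computed in closed form, but the invariance property being exploited is the same: rigid view changes preserve the skeleton's intrinsic structure.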


Metadata

Title: View Transfer on Human Skeleton Pose: Automatically Disentangle the View-Variant and View-Invariant Information for Pose Representation Learning
Authors: Qiang Nie, Yunhui Liu
Publication date: 01-08-2020
Publisher: Springer US
Published in: International Journal of Computer Vision, Issue 1/2021
Print ISSN: 0920-5691
Electronic ISSN: 1573-1405
DOI: https://doi.org/10.1007/s11263-020-01354-7
