
2024 | Original Paper | Book Chapter

Unsupervised 3D Articulated Object Correspondences with Part Approximation and Shape Refinement

Authors: Junqi Diao, Haiyong Jiang, Feilong Yan, Yong Zhang, Jinhui Luan, Jun Xiao

Published in: Computer-Aided Design and Computer Graphics

Publisher: Springer Nature Singapore


Abstract

Reconstructing 3D human shapes with high-quality geometry and dense correspondences is important for many applications. Template-fitting methods can generate meshes that satisfy these requirements but struggle to capture fine geometric details and accurate poses. The main challenge is that models exhibit large discrepancies across different poses. Directly learning large per-point displacements to account for differently posed shapes is prone to artifacts and generalizes poorly. Methods based on statistical representations avoid such artifacts by restricting human shapes to a limited expression space, which in turn makes it difficult to reproduce shape details. In this work, we propose a coarse-to-fine method that addresses the problem by dividing it into part approximation and shape refinement in an unsupervised manner. Our basic observation is that the poses of human parts account for most articulated shape variations and thus benefit pose generalization; moreover, geometric details can be fitted easily once the part poses are estimated. At the coarse-fitting stage, we propose a part approximation network that transforms a template to fit the input using a set of pose parameters. For refinement, we propose a shape refinement network that fits the shape details. Qualitative and quantitative studies on several datasets demonstrate that our method outperforms other unsupervised methods.
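
To make the two-stage idea concrete, below is a minimal, hypothetical PyTorch sketch of such a coarse-to-fine pipeline. The module names (PartApproximationNet, ShapeRefinementNet, PointEncoder), the number of parts, and the pose parameterization (a per-part axis-angle rotation plus translation) are illustrative assumptions, not the chapter's actual architecture; in an unsupervised setting, such a pipeline would typically be trained with a Chamfer-style reconstruction loss between the deformed template and the input scan, plus regularizers on the predicted poses and offsets.

```python
# Hypothetical sketch of a coarse-to-fine fitting pipeline in PyTorch.
# All module names, tensor shapes, and hyper-parameters are assumptions
# for illustration; the chapter's actual networks and losses may differ.
import torch
import torch.nn as nn


def axis_angle_to_matrix(aa):
    """Rodrigues' formula: axis-angle vectors (..., 3) -> rotation matrices (..., 3, 3)."""
    angle = aa.norm(dim=-1, keepdim=True).clamp(min=1e-8)   # (..., 1)
    axis = aa / angle
    x, y, z = axis.unbind(-1)
    zero = torch.zeros_like(x)
    K = torch.stack([zero, -z, y, z, zero, -x, -y, x, zero], dim=-1)
    K = K.view(*aa.shape[:-1], 3, 3)                         # skew-symmetric matrices
    eye = torch.eye(3, device=aa.device).expand(K.shape)
    s, c = angle.sin()[..., None], angle.cos()[..., None]
    return eye + s * K + (1.0 - c) * (K @ K)


class PointEncoder(nn.Module):
    """Shared point-cloud encoder (PointNet-style per-point MLP + max pooling)."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )

    def forward(self, pts):                      # pts: (B, N, 3)
        return self.mlp(pts).max(dim=1).values   # (B, feat_dim)


class PartApproximationNet(nn.Module):
    """Coarse stage: predict one rigid transform (pose) per template part."""
    def __init__(self, num_parts=16, feat_dim=256):
        super().__init__()
        self.num_parts = num_parts
        self.encoder = PointEncoder(feat_dim)
        # axis-angle rotation (3) + translation (3) per part
        self.pose_head = nn.Linear(feat_dim, num_parts * 6)

    def forward(self, scan, template, part_labels):
        # scan: (B, N, 3); template: (B, M, 3); part_labels: (M,) in [0, num_parts)
        pose = self.pose_head(self.encoder(scan)).view(-1, self.num_parts, 6)
        rot = axis_angle_to_matrix(pose[..., :3])            # (B, P, 3, 3)
        trans = pose[..., 3:]                                 # (B, P, 3)
        R = rot[:, part_labels]                               # broadcast part poses to vertices
        t = trans[:, part_labels]
        coarse = torch.einsum('bmij,bmj->bmi', R, template) + t
        return coarse                                         # coarsely posed template (B, M, 3)


class ShapeRefinementNet(nn.Module):
    """Fine stage: predict small per-vertex displacements on top of the coarse fit."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.encoder = PointEncoder(feat_dim)
        self.offset_mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, 128), nn.ReLU(),
            nn.Linear(128, 3),
        )

    def forward(self, scan, coarse):
        feat = self.encoder(scan)                              # (B, F)
        feat = feat[:, None].expand(-1, coarse.shape[1], -1)   # (B, M, F)
        offsets = self.offset_mlp(torch.cat([coarse, feat], dim=-1))
        return coarse + offsets                                # refined vertices (B, M, 3)


# Example usage with random data (shapes only, untrained weights):
# scan     = torch.rand(2, 2048, 3)
# template = torch.rand(2, 6890, 3)
# labels   = torch.randint(0, 16, (6890,))
# coarse   = PartApproximationNet()(scan, template, labels)
# refined  = ShapeRefinementNet()(scan, coarse)
```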


Metadata
Title
Unsupervised 3D Articulated Object Correspondences with Part Approximation and Shape Refinement
Authors
Junqi Diao
Haiyong Jiang
Feilong Yan
Yong Zhang
Jinhui Luan
Jun Xiao
Copyright Year
2024
Publisher
Springer Nature Singapore
DOI
https://doi.org/10.1007/978-981-99-9666-7_1
