
2025 | OriginalPaper | Chapter

Explicitly Guided Information Interaction Network for Cross-Modal Point Cloud Completion

Authors: Hang Xu, Chen Long, Wenxiao Zhang, Yuan Liu, Zhen Cao, Zhen Dong, Bisheng Yang

Published in: Computer Vision – ECCV 2024

Publisher: Springer Nature Switzerland


Abstract

In this paper, we explore a novel framework, EGIInet (Explicitly Guided Information Interaction Network), a model for the view-guided point cloud completion (ViPC) task, which aims to restore a complete point cloud from a partial one with the help of a single-view image. In comparison with previous methods that relied on the global semantics of the input image, EGIInet efficiently combines the information from the two modalities by leveraging the geometric nature of the completion task. Specifically, we propose an explicitly guided information interaction strategy supported by modal alignment for point cloud completion. First, in contrast to previous methods that simply use separate 2D and 3D backbones to encode features, we unify the encoding process to promote modal alignment. Second, we propose a novel explicitly guided information interaction strategy that helps the network identify critical information within images, thus achieving better guidance for completion. Extensive experiments demonstrate the effectiveness of our framework: we achieve a new state of the art (+16% CD over XMFnet) on benchmark datasets while using fewer parameters than previous methods. The pre-trained model and code are available at https://github.com/WHU-USI3DV/EGIInet.
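To make the abstract's two ideas concrete, the sketch below illustrates (a) a unified encoder in which image tokens and point-cloud tokens pass through shared transformer blocks to promote modal alignment, and (b) an explicit interaction step in which the partial point cloud attends to the image so that completion-relevant image regions can guide the geometry. This is a minimal, hypothetical PyTorch-style sketch written for exposition only: every module name, dimension, and design choice here is an assumption, not the authors' released implementation (linked above). A symmetric Chamfer Distance helper is included because CD is the metric quoted in the abstract.

```python
# Hypothetical sketch of a unified-encoding + guided-interaction pipeline.
# All names and hyperparameters are illustrative assumptions, not EGIInet's code.
import torch
import torch.nn as nn


class UnifiedEncoder(nn.Module):
    """Tokenizes both modalities, then runs shared transformer blocks so that
    image and point-cloud tokens live in one aligned feature space."""

    def __init__(self, dim: int = 384, depth: int = 4, heads: int = 6):
        super().__init__()
        # Modality-specific tokenizers (assumed): image -> 16x16 patch tokens,
        # partial point cloud -> per-point tokens via a small MLP.
        self.img_tokenizer = nn.Conv2d(3, dim, kernel_size=16, stride=16)
        self.pcd_tokenizer = nn.Sequential(nn.Linear(3, dim), nn.GELU(), nn.Linear(dim, dim))
        # Shared encoder blocks: the "unified encoding" that promotes alignment.
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.shared_blocks = nn.TransformerEncoder(layer, depth)

    def forward(self, img: torch.Tensor, pcd: torch.Tensor):
        img_tokens = self.img_tokenizer(img).flatten(2).transpose(1, 2)  # (B, N_img, C)
        pcd_tokens = self.pcd_tokenizer(pcd)                             # (B, N_pts, C)
        tokens = self.shared_blocks(torch.cat([img_tokens, pcd_tokens], dim=1))
        n_img = img_tokens.shape[1]
        return tokens[:, :n_img], tokens[:, n_img:]


class GuidedInteraction(nn.Module):
    """A single cross-attention step standing in for the explicitly guided
    information interaction: point tokens query the image tokens, so image
    evidence about the missing geometry flows into the point features."""

    def __init__(self, dim: int = 384, heads: int = 6):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, pcd_tokens: torch.Tensor, img_tokens: torch.Tensor):
        guided, _ = self.cross_attn(pcd_tokens, img_tokens, img_tokens)
        return pcd_tokens + guided  # residual fusion of image guidance


def chamfer_distance(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer Distance between (B, N, 3) and (B, M, 3) point sets."""
    d = torch.cdist(pred, gt)                                   # pairwise L2 distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()


if __name__ == "__main__":
    enc, mix = UnifiedEncoder(), GuidedInteraction()
    img = torch.randn(2, 3, 224, 224)      # single-view image
    partial = torch.randn(2, 2048, 3)      # partial point cloud
    img_tok, pcd_tok = enc(img, partial)
    fused = mix(pcd_tok, img_tok)          # image-guided point features
    print(fused.shape)                     # a decoder (not shown) would regress the complete cloud
```

A usage note on the design: keeping the encoder weights shared across modalities is what the abstract calls unified encoding, while the cross-attention block is one plausible way to realize "explicit guidance"; the released repository should be consulted for the actual architecture and losses.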


Literature
1. Aiello, E., Valsesia, D., Magli, E.: Cross-modal learning for image-guided point cloud shape completion. Adv. Neural. Inf. Process. Syst. 35, 37349–37362 (2022)
2. Berger, M., et al.: State of the art in surface reconstruction from point clouds. In: 35th Annual Conference of the European Association for Computer Graphics, Eurographics 2014 - State of the Art Reports. The Eurographics Association (2014)
3. Cao, Z., et al.: KT-Net: knowledge transfer for unpaired 3D shape completion. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 37, pp. 286–294 (2023)
4. Chen, A., et al.: PiMAE: point cloud and image interactive masked autoencoders for 3D object detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5291–5301 (2023)
5. Cui, R., et al.: P2C: self-supervised point cloud completion from single partial clouds. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 14351–14360 (2023)
6. Cui, Y., et al.: Deep learning for image and point cloud fusion in autonomous driving: a review. IEEE Trans. Intell. Transp. Syst. 23(2), 722–739 (2021)
7. Dai, A., Ruizhongtai Qi, C., Nießner, M.: Shape completion using 3D-encoder-predictor CNNs and shape synthesis. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5868–5877 (2017)
8. Dosovitskiy, A., et al.: An image is worth 16x16 words: transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020)
9. Du, Z., et al.: CDPNet: cross-modal dual phases network for point cloud completion. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, pp. 1635–1643 (2024)
10. Eldar, Y., Lindenbaum, M., Porat, M., Zeevi, Y.Y.: The farthest point strategy for progressive image sampling. IEEE Trans. Image Process. 6(9), 1305–1315 (1997)
11. Fei, B., et al.: Comprehensive review of deep learning-based 3D point cloud completion processing and analysis. IEEE Trans. Intell. Transp. Syst. (2022)
12. Geiger, A., Lenz, P., Stiller, C., Urtasun, R.: Vision meets robotics: the KITTI dataset. Int. J. Robot. Res. 32(11), 1231–1237 (2013)
13. Gong, J., et al.: Optimization over disentangled encoding: unsupervised cross-domain point cloud completion via occlusion factor manipulation. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds.) ECCV 2022. LNCS, vol. 13662, pp. 517–533. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-20086-1_30
14. Groueix, T., Fisher, M., Kim, V.G., Russell, B.C., Aubry, M.: A papier-mâché approach to learning 3D surface generation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 216–224 (2018)
15. Gu, Y., Wang, Y., Li, Y.: A survey on deep learning-driven remote sensing image scene understanding: scene classification, scene retrieval and scene-guided object detection. Appl. Sci. 9(10), 2110 (2019)
17. Han, X.F., Laga, H., Bennamoun, M.: Image-based 3D object reconstruction: state-of-the-art and trends in the deep learning era. IEEE Trans. Pattern Anal. Mach. Intell. 43(5), 1578–1604 (2019)
18. Hong, S., Yavartanoo, M., Neshatavar, R., Lee, K.M.: ACL-SPC: adaptive closed-loop system for self-supervised point cloud completion. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9435–9444 (2023)
19. Hou, J., Dai, A., Nießner, M.: 3D-SIS: 3D semantic instance segmentation of RGB-D scans. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4421–4430 (2019)
20. Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017)
21. Huang, Z., Yu, Y., Xu, J., Ni, F., Le, X.: PF-Net: point fractal network for 3D point cloud completion. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2020)
22. Knapitsch, A., Park, J., Zhou, Q.Y., Koltun, V.: Tanks and temples: benchmarking large-scale scene reconstruction. ACM Trans. Graph. (ToG) 36(4), 1–13 (2017)
23. Li, S., Gao, P., Tan, X., Wei, M.: ProxyFormer: proxy alignment assisted point cloud completion with missing part sensitive transformer. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9466–9475 (2023)
24. Li, Y., et al.: Deep learning for LiDAR point clouds in autonomous driving: a review. IEEE Trans. Neural Netw. Learn. Syst. 32(8), 3412–3432 (2020)
25. Liu, M., Sheng, L., Yang, S., Shao, J., Hu, S.M.: Morphing and sampling network for dense point cloud completion. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 11596–11603 (2020)
27.
28. Lyu, Z., Kong, Z., Xu, X., Pan, L., Lin, D.: A conditional point diffusion-refinement paradigm for 3D point cloud completion. arXiv preprint arXiv:2112.03530 (2021)
29. Ma, Z., Liu, S.: A review of 3D reconstruction techniques in civil engineering and their applications. Adv. Eng. Inform. 37, 163–174 (2018)
30. Nguyen, A., Le, B.: 3D point cloud segmentation: a survey. In: 2013 6th IEEE Conference on Robotics, Automation and Mechatronics (RAM), pp. 225–230. IEEE (2013)
32. Sarmad, M., Lee, H.J., Kim, Y.M.: RL-GAN-Net: a reinforcement learning agent controlled GAN network for real-time point cloud shape completion. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5898–5907 (2019)
33. Tang, J., Gong, Z., Yi, R., Xie, Y., Ma, L.: LAKe-Net: topology-aware point cloud completion by localizing aligned keypoints. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1726–1735 (2022)
34. Tchapmi, L.P., Kosaraju, V., Rezatofighi, H., Reid, I., Savarese, S.: TopNet: structural point cloud decoder. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019)
35. Wang, J., Cui, Y., Guo, D., Li, J., Liu, Q., Shen, C.: PointAttN: you only need attention for point cloud completion. arXiv preprint arXiv:2203.08485 (2022)
36. Wang, X., Ang Jr., M.H., Lee, G.H.: Cascaded refinement network for point cloud completion. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2020)
38. Wang, X., Ang Jr., M.H., Lee, G.H.: Cascaded refinement network for point cloud completion. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 790–799 (2020)
39. Wen, X., et al.: PMP-Net: point cloud completion by learning multi-step point moving paths. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7443–7452 (2021)
40. Wen, X., et al.: PMP-Net++: point cloud completion by transformer-enhanced multi-step point moving paths. IEEE Trans. Pattern Anal. Mach. Intell. 45(1), 852–867 (2022)
41. Xiang, P., et al.: SnowflakeNet: point cloud completion by snowflake point deconvolution with skip-transformer. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 5499–5509 (2021)
42. Xiang, P., et al.: SnowflakeNet: point cloud completion by snowflake point deconvolution with skip-transformer. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5499–5509 (2021)
44. Xie, Y., Tian, J., Zhu, X.X.: Linking points with labels in 3D: a review of point cloud semantic segmentation. IEEE Geosci. Remote Sens. Mag. 8(4), 38–59 (2020)
45. Xu, Y., Stilla, U.: Toward building and civil infrastructure reconstruction from point clouds: a review on data and key techniques. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 14, 2857–2885 (2021)
46. Yang, G., Huang, X., Hao, Z., Liu, M.Y., Belongie, S., Hariharan, B.: PointFlow: 3D point cloud generation with continuous normalizing flows. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 4541–4550 (2019)
47. Yang, Y., Feng, C., Shen, Y., Tian, D.: FoldingNet: interpretable unsupervised learning on 3D point clouds. arXiv preprint arXiv:1712.07262 (2017)
48. Ying, H., Shao, T., Wang, H., Yang, Y., Zhou, K.: Adaptive local basis functions for shape completion. In: ACM SIGGRAPH 2023 Conference Proceedings, pp. 1–11 (2023)
49. Yu, X., Rao, Y., Wang, Z., Liu, Z., Lu, J., Zhou, J.: PoinTr: diverse point cloud completion with geometry-aware transformers. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 12498–12507 (2021)
50. Yu, X., Rao, Y., Wang, Z., Lu, J., Zhou, J.: AdaPoinTr: diverse point cloud completion with adaptive geometry-aware transformers. arXiv preprint arXiv:2301.04545 (2023)
51. Yuan, W., Khot, T., Held, D., Mertz, C., Hebert, M.: PCN: point completion network. In: 2018 International Conference on 3D Vision (3DV), pp. 728–737. IEEE (2018)
52. Zhang, J., et al.: Unsupervised 3D shape completion through GAN inversion. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1768–1777 (2021)
53. Zhang, W., Dong, Z., Liu, J., Yan, Q., Xiao, C., et al.: Point cloud completion via skeleton-detail transformer. IEEE Trans. Vis. Comput. Graph. (2022)
55. Zhang, X., et al.: View-guided point cloud completion. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 15890–15899 (2021)
57. Zhu, Z., Chen, H., He, X., Wang, W., Qin, J., Wei, M.: SVDFormer: complementing point cloud via self-view augmentation and self-structure dual-generator. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 14508–14518 (2023)
58. Zhu, Z., et al.: CSDN: cross-modal shape-transfer dual-refinement network for point cloud completion. IEEE Trans. Vis. Comput. Graph. (2023)
Metadata
Title
Explicitly Guided Information Interaction Network for Cross-Modal Point Cloud Completion
Authors
Hang Xu
Chen Long
Wenxiao Zhang
Yuan Liu
Zhen Cao
Zhen Dong
Bisheng Yang
Copyright Year
2025
DOI
https://doi.org/10.1007/978-3-031-73254-6_24
