
18.03.2022

RIConv++: Effective Rotation Invariant Convolutions for 3D Point Clouds Deep Learning

Authors: Zhiyuan Zhang, Binh-Son Hua, Sai-Kit Yeung

Published in: International Journal of Computer Vision | Issue 5/2022


Abstract

Deep learning on 3D point clouds is a promising field of research that allows a neural network to learn features of point clouds directly, making it a robust tool for solving 3D scene understanding tasks. While recent works show that point cloud convolutions can be invariant to translation and point permutation, investigations of the rotation invariance property for point cloud convolutions have so far been scarce. Some existing methods perform point cloud convolutions with rotation-invariant features, but they generally do not perform as well as their translation-invariant-only counterparts. In this work, we argue that a key reason is that, compared to point coordinates, the rotation-invariant features consumed by a point cloud convolution are not as distinctive. To address this problem, we propose a simple yet effective convolution operator that enhances feature distinction by designing powerful rotation-invariant features from local regions. We consider the relationship between the point of interest and its neighbors, as well as the internal relationships among the neighbors, to largely improve the feature descriptiveness. Our network architecture can capture both local and global context by simply tuning the neighborhood size in each convolution layer. We conduct several experiments on synthetic and real-world point cloud classification, part segmentation, and shape retrieval to evaluate our method, which achieves state-of-the-art accuracy under challenging rotations.
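
As a concrete illustration of the kind of rotation-invariant features the abstract describes, the sketch below builds per-neighbor descriptors from distances and angles measured against a local reference axis; such quantities depend only on the relative geometry of the region, so they are unchanged by any rotation of the input cloud. This is a minimal NumPy sketch under illustrative assumptions: the function name, the choice of reference axis, and the specific feature triple are not the paper's exact RIConv++ formulation.

```python
# Minimal sketch (illustrative assumption, not the authors' exact RIConv++ operator):
# rotation-invariant features for a local region built from distances and angles,
# which depend only on relative geometry and are therefore unchanged by rotation.
import numpy as np

def rotation_invariant_features(p_ref, neighbors):
    """Per-neighbor descriptors for one local region.

    p_ref     : (3,) point of interest.
    neighbors : (k, 3) its k nearest neighbors.
    Returns   : (k, 3) features per neighbor:
                [distance to p_ref, angle to a local reference axis, distance to centroid].
    """
    centroid = neighbors.mean(axis=0)
    # Reference axis from the point of interest to the neighborhood centroid;
    # it rotates together with the cloud, so angles measured against it are invariant.
    axis = centroid - p_ref
    axis = axis / (np.linalg.norm(axis) + 1e-9)

    feats = []
    for q in neighbors:
        v = q - p_ref
        d_ref = np.linalg.norm(v)                    # distance to the point of interest
        cos_a = np.dot(v, axis) / (d_ref + 1e-9)     # cosine of angle to the reference axis
        d_cen = np.linalg.norm(q - centroid)         # distance to the neighborhood centroid
        feats.append([d_ref, np.arccos(np.clip(cos_a, -1.0, 1.0)), d_cen])
    return np.asarray(feats)

# Sanity check: the features are identical before and after a random rotation.
rng = np.random.default_rng(0)
pts = rng.normal(size=(16, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal matrix
if np.linalg.det(Q) < 0:                       # force a proper rotation (det = +1)
    Q[:, 0] *= -1
f_before = rotation_invariant_features(pts[0], pts[1:])
f_after = rotation_invariant_features(pts[0] @ Q.T, pts[1:] @ Q.T)
assert np.allclose(f_before, f_after, atol=1e-6)
```

Because every entry is a distance or an angle measured inside the neighborhood, the subsequent convolution never sees absolute coordinates, which is what makes the features rotation invariant; per the abstract, the actual RIConv++ features go further by also exploiting the internal relationships among the neighbors themselves.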


Metadata
Title
RIConv++: Effective Rotation Invariant Convolutions for 3D Point Clouds Deep Learning
Authors
Zhiyuan Zhang
Binh-Son Hua
Sai-Kit Yeung
Publication date
18.03.2022
Publisher
Springer US
Published in
International Journal of Computer Vision / Issue 5/2022
Print ISSN: 0920-5691
Electronic ISSN: 1573-1405
DOI
https://doi.org/10.1007/s11263-022-01601-z
