
16.04.2020 | ORIGINAL ARTICLE

Grasping pose estimation for SCARA robot based on deep learning of point cloud

Authors: Zhengtuo Wang, Yuetong Xu, Quan He, Zehua Fang, Guanhua Xu, Jianzhong Fu

Published in: The International Journal of Advanced Manufacturing Technology | Issue 4/2020


Abstract

With advances in 3D measurement technology, 3D vision sensors and object pose estimation methods have been developed for robotic loading and unloading. In this work, an end-to-end deep learning method on point clouds, PointNetRGPE, is proposed to estimate the grasping pose of a SCARA robot. In the PointNetRGPE model, the point cloud and the class number are fused into a point-class vector, and several PointNet-like networks estimate the robot grasping pose, which comprises a 3D translation and a 1D rotation. Because rotational symmetry is common in man-made and industrial objects, a novel architecture is introduced into PointNetRGPE to handle pose estimation for objects with rotational symmetry about the z-axis. Additionally, an experimental platform comprising an industrial robot and a binocular stereo vision system is built, and a dataset with three subsets is constructed. Finally, PointNetRGPE is evaluated on this dataset, achieving success rates of 98.89%, 98.89%, and 94.44% on the three subsets, respectively.
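To make the description above concrete, here is a minimal sketch (not the authors' released implementation) of the two ideas in the abstract, written in PyTorch. It assumes the "point-class vector" is formed by appending a one-hot class encoding to each (x, y, z) point, and that the pose head regresses a 3D translation plus a single rotation angle about the z-axis; the class name `PointClassPoseNet` and all layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PointClassPoseNet(nn.Module):
    """Illustrative PointNet-like regressor for a SCARA grasp pose
    (3D translation + 1D rotation about z). A sketch, not the paper's code."""

    def __init__(self, num_classes: int):
        super().__init__()
        in_dim = 3 + num_classes  # xyz coordinates + one-hot class per point
        # Shared per-point MLP, realised as 1x1 convolutions as in PointNet.
        self.mlp = nn.Sequential(
            nn.Conv1d(in_dim, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.ReLU(),
        )
        # Pose head; the angle is predicted as (sin, cos) to avoid the
        # wrap-around discontinuity at +-pi.
        self.head = nn.Sequential(
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 5),  # (tx, ty, tz, sin_theta, cos_theta)
        )

    def forward(self, points, class_onehot):
        # points: (B, N, 3); class_onehot: (B, num_classes)
        B, N, _ = points.shape
        cls = class_onehot.unsqueeze(1).expand(B, N, -1)
        x = torch.cat([points, cls], dim=2)       # per-point "point-class vector"
        feat = self.mlp(x.transpose(1, 2))        # (B, 1024, N)
        feat = feat.max(dim=2).values             # order-invariant max-pooling
        t, sincos = self.head(feat).split([3, 2], dim=1)
        theta = torch.atan2(sincos[:, 0], sincos[:, 1])
        return t, theta                           # (B, 3) translation, (B,) angle
```

The abstract does not spell out how rotational symmetry about the z-axis is resolved; one common treatment, shown purely as an assumed example, is to compare a predicted angle with the ground truth only up to the object's k-fold symmetry group, so that all physically indistinguishable orientations incur zero loss:

```python
def symmetric_rotation_loss(theta_pred, theta_gt, k):
    """Angle loss for an object with k-fold rotational symmetry about z:
    evaluate against all k equivalent ground-truth angles and keep the
    smallest wrapped difference. theta_pred, theta_gt: tensors of shape (B,)."""
    two_pi = 2.0 * torch.pi
    offsets = torch.arange(k, device=theta_pred.device) * (two_pi / k)
    diff = theta_pred.unsqueeze(1) - (theta_gt.unsqueeze(1) + offsets)
    diff = torch.remainder(diff + torch.pi, two_pi) - torch.pi  # wrap to [-pi, pi)
    return diff.abs().min(dim=1).values.mean()
```

Under this reading, a fully symmetric part such as a plain cylinder corresponds to the limit of large k, where the rotation term can simply be dropped from the loss.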


Metadata
Title
Grasping pose estimation for SCARA robot based on deep learning of point cloud
Authors
Zhengtuo Wang
Yuetong Xu
Quan He
Zehua Fang
Guanhua Xu
Jianzhong Fu
Publication date
16.04.2020
Publisher
Springer London
Published in
The International Journal of Advanced Manufacturing Technology / Issue 4/2020
Print ISSN: 0268-3768
Electronic ISSN: 1433-3015
DOI
https://doi.org/10.1007/s00170-020-05257-2
