2022 | OriginalPaper | Chapter

A Novel Grasping Approach with Dynamic Annotation Mechanism

Authors : Shuai Yang, Bin Wang, Junyuan Tao, Qifan Duan, Hong Liu

Published in: Intelligent Robotics and Applications

Publisher: Springer International Publishing

Abstract

Grasping unknown objects is a challenging but critical problem in robotic research. However, existing studies focus only on the shape of objects and ignore the differences between robot systems, which have a vital influence on the completion of grasping tasks. In this work, we present a novel grasping approach with a dynamic annotation mechanism to address this problem, comprising a grasping dataset and a grasping detection network. The dataset provides two kinds of annotations, named basic and decent annotations; the former can be transformed into the latter according to the mechanical parameters of antipodal grippers and the absolute positioning accuracies of robots, so that the characteristics of the robot system are taken into account. Meanwhile, a new evaluation metric is presented to provide reliable assessments of the predicted grasps. The proposed grasping detection network is a fully convolutional network that can generate robust grasps for robots. In addition, evaluations on datasets and experiments on a real robot demonstrate the effectiveness of our approach.

Metadata
Title
A Novel Grasping Approach with Dynamic Annotation Mechanism
Authors
Shuai Yang
Bin Wang
Junyuan Tao
Qifan Duan
Hong Liu
Copyright Year
2022
DOI
https://doi.org/10.1007/978-3-031-13844-7_5