
13-01-2018

A Minimal Dataset Construction Method Based on Similar Training for Capture Position Recognition of Space Robot

Authors: Xiaodong Hu, Xuexiang Huang, Tianjian Hu, Zhong Shi

Published in: Wireless Personal Communications | Issue 2/2018

Abstract

Recognizing the capture position of non-cooperative targets is an important component of on-orbit servicing. Traditional machine learning approaches cannot satisfy the requirements of space missions, which demand generality, accuracy, and real-time performance. To meet these requirements, a deep learning detector, the Faster Region-based Convolutional Neural Network (Faster RCNN), is introduced for space robot capture position recognition. Based on the principle of similar training, a minimal dataset construction method is proposed to address the scarcity of training samples from the space environment. First, the deep neural network is pre-trained on the ImageNet training set. Then, using the trained weights to initialize the network, it is fine-tuned with 1000 training samples from the space environment. Finally, a simulation experiment is designed, and the results indicate that the similar training principle can solve the capture position recognition problem for non-cooperative targets.
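The abstract outlines a transfer learning workflow: pre-train on ImageNet, then fine-tune a Faster RCNN detector on roughly 1000 space-environment samples. The paper does not specify an implementation framework; the sketch below illustrates the same fine-tuning pattern using PyTorch/torchvision, where the number of capture-position classes and the `space_loader` data source are placeholder assumptions rather than details from the paper.

```python
# Minimal sketch of the pre-train / fine-tune recipe described in the abstract,
# assuming PyTorch + torchvision (the paper does not name a framework).
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# 1. Start from a Faster RCNN detector whose weights come from large-scale
#    generic pre-training (the analogue of the ImageNet pre-training step).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# 2. Replace the box classification head with one sized for the capture-position
#    task. num_classes = 2 (one capture-position class + background) is a
#    placeholder, not a value taken from the paper.
num_classes = 2
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# 3. Fine-tune on the small space-environment dataset (~1000 labelled images).
#    `space_loader` is assumed to yield (images, targets) in torchvision's
#    detection format: a list of image tensors and a list of dicts containing
#    "boxes" and "labels".
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
model.train()
for images, targets in space_loader:
    loss_dict = model(images, targets)   # per-component detection losses in train mode
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In the paper's workflow, the fine-tuned detector is then evaluated in a simulation experiment on images of the non-cooperative target.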


Metadata
Title
A Minimal Dataset Construction Method Based on Similar Training for Capture Position Recognition of Space Robot
Authors
Xiaodong Hu
Xuexiang Huang
Tianjian Hu
Zhong Shi
Publication date
13-01-2018
Publisher
Springer US
Published in
Wireless Personal Communications / Issue 2/2018
Print ISSN: 0929-6212
Electronic ISSN: 1572-834X
DOI
https://doi.org/10.1007/s11277-018-5247-y
