Published in: Autonomous Robots 2/2018

06.02.2017

Contour-based next-best view planning from point cloud segmentation of unknown objects

Authors: Riccardo Monica, Jacopo Aleotti

Abstract

A novel strategy is presented to determine the next-best view for a robot arm, equipped with a depth camera in eye-in-hand configuration, for autonomous exploration of unknown objects. Instead of maximizing the total expected volume of unknown space that becomes visible, the next-best view is chosen to observe the border of incomplete objects. Salient regions of space that belong to the objects are detected, without any prior knowledge, by applying a point cloud segmentation algorithm. The system uses a Kinect V2 sensor, which has not been considered in previous works on next-best view planning, and it exploits KinectFusion to maintain a volumetric representation of the environment. A low-level procedure to reduce the number of invalid points returned by the Kinect V2 is also presented. The viability of the approach has been demonstrated in a real setup where the robot is fully autonomous. Experiments indicate that the proposed method enables the robot to actively explore the objects faster than a standard next-best view algorithm.
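The core idea of the abstract — scoring candidate views by how many object-border cells they observe, rather than by total unknown volume — can be illustrated with a minimal sketch. This is a hypothetical toy implementation, not the authors' method: it assumes a dict-based voxel grid with three states, defines the "contour" as occupied cells adjacent to unknown space, and counts contour cells inside a simple field-of-view cone for each candidate view.

```python
# Hedged sketch of contour-based next-best-view selection (assumed interfaces,
# not the paper's implementation). A cell is a "contour" cell if it is
# occupied and borders unknown space; a view scores higher the more contour
# cells fall inside its field-of-view cone.
import math

UNKNOWN, FREE, OCCUPIED = 0, 1, 2

def contour_cells(grid):
    """Occupied cells bordering unknown space in a dict-based voxel grid.

    Cells absent from the dict are treated as UNKNOWN.
    """
    out = []
    for (x, y, z), state in grid.items():
        if state != OCCUPIED:
            continue
        for dx, dy, dz in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
            if grid.get((x + dx, y + dy, z + dz), UNKNOWN) == UNKNOWN:
                out.append((x, y, z))
                break
    return out

def score_view(view_pos, view_dir, cells, fov_deg=60.0):
    """Count contour cells inside the sensor's field-of-view cone."""
    half = math.radians(fov_deg) / 2.0
    n = math.sqrt(sum(c * c for c in view_dir))
    d = tuple(c / n for c in view_dir)  # normalized viewing direction
    hits = 0
    for c in cells:
        v = tuple(c[i] - view_pos[i] for i in range(3))
        vn = math.sqrt(sum(x * x for x in v))
        if vn == 0:
            continue
        cos_ang = sum(v[i] * d[i] for i in range(3)) / vn
        if cos_ang >= math.cos(half):
            hits += 1
    return hits

def next_best_view(candidates, grid):
    """Pick the (position, direction) candidate seeing most contour cells."""
    cells = contour_cells(grid)
    return max(candidates, key=lambda c: score_view(c[0], c[1], cells))
```

A full system would additionally cast rays for occlusion, check reachability of the arm, and update the grid with KinectFusion; the sketch only conveys why focusing on object borders steers the sensor toward completing partially observed objects instead of scanning empty unknown space.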


Metadata
Title
Contour-based next-best view planning from point cloud segmentation of unknown objects
Authors
Riccardo Monica
Jacopo Aleotti
Publication date
06.02.2017
Publisher
Springer US
Published in
Autonomous Robots / Issue 2/2018
Print ISSN: 0929-5593
Electronic ISSN: 1573-7527
DOI
https://doi.org/10.1007/s10514-017-9618-0
