Published in: Autonomous Robots 3/2012

01.10.2012

Global motion planning under uncertain motion, sensing, and environment map

Authors: Hanna Kurniawati, Tirthankar Bandyopadhyay, Nicholas M. Patrikalakis


Abstract

Uncertainty in motion planning is often caused by three main sources: motion error, sensing error, and an imperfect environment map. Despite the significant effect of all three sources of uncertainty on motion planning problems, most planners take into account only one or at most two of them. We propose a new motion planner, called Guided Cluster Sampling (GCS), that takes into account all three sources of uncertainty for robots with active sensing capabilities. GCS uses the Partially Observable Markov Decision Process (POMDP) framework and the point-based POMDP approach. Although point-based POMDP solvers have shown impressive progress over the past few years, they perform poorly when the environment map is imperfect. This poor performance is due to the extremely high-dimensional state space, which translates into an extremely large belief space B.
We alleviate this problem by constructing a more suitable sampling distribution, based on the observations that when the robot has active sensing capability, B can be partitioned into a collection of much smaller sub-spaces, and that an optimal policy can often be generated by sufficient sampling of a small subset of this collection. Utilizing these observations, GCS samples B in two stages: a subspace is sampled from the collection, and then a belief is sampled from that subspace.
It uses information from the set of sampled sub-spaces and sampled beliefs to guide subsequent sampling. Simulation results on marine robotics scenarios suggest that GCS can generate reasonable policies for motion planning problems with uncertain motion, sensing, and environment map that are unsolvable by the best point-based POMDP solvers today. Furthermore, GCS handles POMDPs with continuous state, action, and observation spaces. We show that, for a class of POMDPs that often occurs in robot motion planning, given enough time, GCS converges to the optimal policy.
To the best of our knowledge, this is the first convergence result for point-based POMDPs with continuous action space.
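The two-stage sampling scheme described in the abstract can be sketched in a few lines. The cluster structure, weights, and belief representation below are illustrative assumptions for exposition, not the paper's actual data structures: a minimal sketch assuming each subspace is a list of belief points and each subspace carries a weight reflecting how promising earlier samples from it were.

```python
import random

def sample_belief(clusters, weights, rng):
    """Two-stage sampling: first pick a subspace (cluster) of the
    belief space, then pick a belief from within that subspace."""
    # Stage 1: sample a subspace from the collection, with probability
    # proportional to its weight (a hypothetical "guidance" score).
    cluster = rng.choices(clusters, weights=weights, k=1)[0]
    # Stage 2: sample a belief uniformly from the chosen subspace.
    belief = rng.choice(cluster)
    return cluster, belief

# Toy example: three "clusters", each a list of belief points over
# two states; weights steer sampling toward the middle cluster.
clusters = [[(0.9, 0.1)], [(0.5, 0.5), (0.4, 0.6)], [(0.1, 0.9)]]
weights = [1.0, 3.0, 1.0]
cluster, belief = sample_belief(clusters, weights, random.Random(0))
```

In GCS the weights would be updated from information gathered about previously sampled sub-spaces and beliefs, so that subsequent sampling concentrates on the small subset of the collection that suffices for a good policy.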


Appendix
Footnotes
1
The proofs of all theorems are in the Appendix.
 
2
See footnote 1.
 
3
See footnote 1.
 
Metadata
Title
Global motion planning under uncertain motion, sensing, and environment map
Authors
Hanna Kurniawati
Tirthankar Bandyopadhyay
Nicholas M. Patrikalakis
Publication date
01.10.2012
Publisher
Springer US
Published in
Autonomous Robots / Issue 3/2012
Print ISSN: 0929-5593
Electronic ISSN: 1573-7527
DOI
https://doi.org/10.1007/s10514-012-9307-y
