
13.02.2018

Learning to exploit passive compliance for energy-efficient gait generation on a compliant humanoid

Authors: Petar Kormushev, Barkan Ugurlu, Darwin G. Caldwell, Nikos G. Tsagarakis

Published in: Autonomous Robots | Issue 1/2019


Abstract

Modern humanoid robots include not only active compliance but also passive compliance. Apart from improved safety and dependability, the availability of passive elements, such as springs, opens up new possibilities for improving energy efficiency. With this in mind, this paper addresses the challenging open problem of exploiting passive compliance for energy-efficient humanoid walking. To this end, we develop a method comprising two parts: an optimization part that finds an optimal vertical center-of-mass trajectory, and a walking pattern generator part that uses this trajectory to produce a dynamically balanced gait. For the optimization part, we propose a reinforcement learning approach that dynamically evolves the policy parametrization during the learning process. By gradually increasing the representational power of the policy parametrization, it finds better policies faster and with less computation. For the walking generator part, we develop a variable-center-of-mass-height ZMP-based bipedal walking pattern generator. The method is tested in real-world experiments with the bipedal robot COMAN and achieves a significant 18% reduction in electric energy consumption by learning to use the robot's passive compliance efficiently.
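To make the two parts of the method more concrete, the sketch below illustrates the general idea of an evolving policy parametrization for the vertical center-of-mass trajectory, together with the standard point-mass ZMP relation for a varying CoM height, p_x = x − (z − p_z)·ẍ/(z̈ + g). This is a minimal illustration under stated assumptions, not the authors' implementation: the `rollout_cost` callback, the knot counts, the nominal CoM height, and the exploration schedule are hypothetical placeholders.

```python
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]


def zmp_x(x, xdd, z, zdd, p_z=0.0):
    """Sagittal ZMP of a point-mass model with varying CoM height.

    Standard relation p_x = x - (z - p_z) * xdd / (zdd + g), useful for
    checking that a candidate vertical CoM trajectory keeps the ZMP
    inside the support polygon.
    """
    return x - (z - p_z) * xdd / (zdd + G)


def evolving_policy_search(rollout_cost, n_iters=60, start_knots=3, max_knots=12):
    """Toy policy search with an evolving parametrization.

    The policy is a vector of spline-knot heights for the vertical CoM
    trajectory over one walking cycle.  Starting from a coarse
    parametrization, the best policy found so far is periodically
    resampled onto a finer knot grid, so the representational power
    grows without discarding earlier progress.
    """
    n = start_knots
    best = 0.52 * np.ones(n)            # nominal CoM height [m], illustrative
    best_cost = rollout_cost(best)      # e.g. measured energy of one walking trial
    sigma = 0.01                        # exploration magnitude [m]

    for it in range(n_iters):
        candidate = best + sigma * np.random.randn(n)   # Gaussian exploration
        cost = rollout_cost(candidate)
        if cost < best_cost:
            best, best_cost = candidate, cost

        # Every few iterations, grow the parametrization by resampling the
        # current best trajectory onto a denser set of knots.
        if (it + 1) % 15 == 0 and n < max_knots:
            n_new = min(2 * n, max_knots)
            old_t = np.linspace(0.0, 1.0, n)
            new_t = np.linspace(0.0, 1.0, n_new)
            best = np.interp(new_t, old_t, best)        # same shape, finer knots
            n = n_new

    return best, best_cost
```

In the paper the objective is the robot's electric energy consumption during walking, and the optimized trajectory is fed to the variable-CoM-height ZMP-based pattern generator; here that whole pipeline is abstracted behind the cost callback.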


Footnotes
2. Spring deflections are mechanically limited to 11.25 degrees in COMAN.
Metadata
Title
Learning to exploit passive compliance for energy-efficient gait generation on a compliant humanoid
Authors
Petar Kormushev
Barkan Ugurlu
Darwin G. Caldwell
Nikos G. Tsagarakis
Publication date
13.02.2018
Publisher
Springer US
Published in
Autonomous Robots / Issue 1/2019
Print ISSN: 0929-5593
Electronic ISSN: 1573-7527
DOI
https://doi.org/10.1007/s10514-018-9697-6
