27.02.2021 | Project Reports

Robots Learn Increasingly Complex Tasks with Intrinsic Motivation and Automatic Curriculum Learning

Domain Knowledge by Emergence of Affordances, Hierarchical Reinforcement and Active Imitation Learning

Authors: Sao Mai Nguyen, Nicolas Duminy, Alexandre Manoury, Dominique Duhaut, Cedric Buche

Published in: KI - Künstliche Intelligenz | Issue 1/2021


Abstract

Multi-task learning by robots poses the challenge of domain knowledge: the complexity of tasks, the complexity of the actions required, and the relationships between tasks that enable transfer learning. We demonstrate that this domain knowledge can be learned to address the challenges of life-long learning. Specifically, the hierarchy between tasks of various complexities is key to inferring a curriculum from simple to composite tasks. We propose a framework for robots to learn sequences of actions of unbounded complexity in order to achieve multiple control tasks of various complexity. Our hierarchical reinforcement learning framework, named SGIM-SAHT, offers a new direction of research and aims to unify partial implementations on robot arms and mobile robots. We outline our contributions that enable robots to map multiple control tasks to sequences of actions: representations of task dependencies, intrinsically motivated exploration to learn task hierarchies, and active imitation learning. While learning the hierarchy of tasks, the learner infers its curriculum by deciding which tasks to explore first, how to transfer knowledge, and when, how and whom to imitate.
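The curriculum mechanism described in the abstract (deciding which tasks to explore first from intrinsic motivation) is commonly realized as learning-progress-based task sampling. The sketch below is a minimal illustration under our own assumptions, not the authors' SGIM-SAHT implementation; the class and method names are hypothetical. A task whose recent competence changes fastest is sampled most often, so the learner's attention shifts from mastered tasks to tasks where progress is currently possible.

```python
import random


class IntrinsicMotivationLearner:
    """Hypothetical sketch of learning-progress-based task selection.

    Tracks a competence measure per task and samples the next task
    to practice in proportion to recent learning progress, yielding
    an automatic curriculum from simple to composite tasks.
    """

    def __init__(self, tasks, window=5):
        self.tasks = list(tasks)
        self.window = window
        # Competence achieved on each past attempt, per task.
        self.history = {t: [] for t in self.tasks}

    def record(self, task, competence):
        """Store the competence (e.g. negative distance to goal) of one attempt."""
        self.history[task].append(competence)

    def learning_progress(self, task):
        """Absolute change in mean competence between two recent windows."""
        h = self.history[task]
        if len(h) < 2 * self.window:
            return 1.0  # optimistic initialization: unexplored tasks look promising
        recent = sum(h[-self.window:]) / self.window
        older = sum(h[-2 * self.window:-self.window]) / self.window
        return abs(recent - older)

    def choose_task(self, rng=random):
        """Sample a task with probability proportional to its learning progress."""
        progress = [self.learning_progress(t) for t in self.tasks]
        total = sum(progress)
        if total == 0:
            return rng.choice(self.tasks)
        r = rng.uniform(0, total)
        acc = 0.0
        for t, p in zip(self.tasks, progress):
            acc += p
            if r <= acc:
                return t
        return self.tasks[-1]
```

In a strategic learner such as the one outlined here, the same proportional-sampling idea can also arbitrate between exploration strategies (autonomous exploration versus imitation of a teacher), which is how the "when, how and whom to imitate" decision can be framed.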

KI - Künstliche Intelligenz

The scientific journal "KI – Künstliche Intelligenz" is the official journal of the division for artificial intelligence within the "Gesellschaft für Informatik e.V." (GI), the German Informatics Society, with contributions from throughout the field of artificial intelligence.

Metadata
Publisher
Springer Berlin Heidelberg
Print ISSN: 0933-1875
Electronic ISSN: 1610-1987
DOI
https://doi.org/10.1007/s13218-021-00708-8
