Research article · HRI '15 Conference Proceedings · DOI: 10.1145/2696454.2696474

Interactive Hierarchical Task Learning from a Single Demonstration

Published: 02 March 2015

ABSTRACT

We have developed learning and interaction algorithms to support a human teaching hierarchical task models to a robot using a single demonstration in the context of a mixed-initiative interaction with bi-directional communication. In particular, we have identified and implemented two important heuristics for suggesting task groupings based on the physical structure of the manipulated artifact and on the data flow between tasks. We have evaluated our algorithms with users in a simulated environment and shown both that the overall approach is usable and that the grouping suggestions significantly improve the learning and interaction.


Published in

HRI '15: Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction
March 2015 · 368 pages
ISBN: 9781450328838
DOI: 10.1145/2696454
Copyright © 2015 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rates

HRI '15 Paper Acceptance Rate: 43 of 169 submissions (25%). Overall Acceptance Rate: 242 of 1,000 submissions (24%).
