This paper presents a method that enables a robot to extract a demonstrator's key motions from unsegmented human motion through imitation learning. When a robot learns motions from an unsegmented time series, it must determine which parts of the continuous motion to learn. In most previous research on imitation learning by autonomous robots, the target motions given to the robot were segmented into meaningful parts by the experimenters in advance; to imitate behaviors from a person's continuous motion, however, the robot must find the segments worth learning on its own. The proposed learning architecture achieves this in three steps: it converts the continuous time series into a discrete sequence of letters using a switching autoregressive model (SARM), finds key-motion candidates with a simple phrase extractor based on n-gram statistics, and removes meaningless segments from the candidates using singular vector decomposition (SVD). In our experiment, a demonstrator showed several unsegmented motions to a robot, and the results reveal that the framework enabled the robot to acquire the prepared key motions.
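The three-step pipeline in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes the SARM stage has already turned each motion time series into a string of discrete labels ("letters"), and the function names, parameters, and SVD scoring heuristic below are our own assumptions.

```python
import numpy as np
from collections import Counter

# Illustrative sketch only. Input: label sequences assumed to come from a SARM
# discretization stage; the scoring heuristic is an assumption, not the paper's.

def count_occurrences(seq, gram):
    """Count (possibly overlapping) occurrences of an n-gram in a label sequence."""
    n = len(gram)
    return sum(1 for i in range(len(seq) - n + 1) if tuple(seq[i:i + n]) == gram)

def ngram_candidates(sequences, n_min=2, n_max=4, min_count=2):
    """Pool n-gram counts over all label sequences; n-grams that recur become
    key-motion candidates (the 'simple phrase extractor' step)."""
    counts = Counter()
    for seq in sequences:
        for n in range(n_min, n_max + 1):
            for i in range(len(seq) - n + 1):
                counts[tuple(seq[i:i + n])] += 1
    return [g for g, c in counts.items() if c >= min_count]

def svd_filter(candidates, sequences, keep=3):
    """Build a candidate-by-sequence count matrix, take its SVD, and keep the
    candidates with the largest weight along the leading singular direction."""
    M = np.array([[count_occurrences(s, g) for s in sequences] for g in candidates], float)
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    scores = np.abs(U[:, 0]) * s[0]
    return [candidates[i] for i in np.argsort(-scores)[:keep]]
```

For example, with demonstrations `["xxabcde", "abcdeyy", "zzabcde"]` that share the hidden key motion `abcde`, the repeated n-grams surviving `ngram_candidates` are all substrings of `abcde`, and `svd_filter` ranks them by their weight in the dominant singular direction of the count matrix.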
- Imitation Learning from Unsegmented Human Motion Using Switching Autoregressive Model and Singular Vector Decomposition
- Springer Berlin Heidelberg