Dialogue act classification is an important step in understanding students’ utterances within tutorial dialogue systems. Machine-learned models of dialogue act classification hold great promise, and among these, unsupervised dialogue act classifiers have the great benefit of eliminating the human annotation effort required to label corpora. In contrast to traditional evaluation approaches, which judge unsupervised dialogue act classifiers by their accuracy on manual labels, we present results of a study that evaluates these models within an end-to-end system evaluation. We compare two versions of a tutorial dialogue system for introductory computer science: one that relies on a supervised dialogue act classifier and one that depends on an unsupervised dialogue act classifier. A study with 51 students shows that both versions of the system achieve similar learning gains and user satisfaction. Additionally, we show that some incoming student characteristics are highly correlated with students’ perceptions of their experience during tutoring. This first end-to-end evaluation of an unsupervised dialogue act classifier within a tutorial dialogue system serves as a step toward acquiring tutorial dialogue management models in a fully automated, scalable way.
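As a hedged illustration of the unsupervised setting (not the authors' actual model, whose details are not given in this abstract), unsupervised dialogue act classification can be sketched as clustering utterances by surface similarity, so that no human-annotated labels are needed. The toy example below clusters bag-of-words vectors with a minimal k-means; all utterances and the cluster count are invented for illustration.

```python
# Sketch: unsupervised "dialogue act" discovery via clustering.
# All utterances, the vocabulary, and k are illustrative assumptions.
from collections import Counter
import math

utterances = [
    "what is a variable",          # question-like student turn
    "how do i declare an array",   # question-like student turn
    "ok i see",                    # acknowledgment-like student turn
    "got it thanks",               # acknowledgment-like student turn
]

# Build a shared vocabulary and represent each utterance as word counts.
vocab = sorted({w for u in utterances for w in u.split()})

def vectorize(text):
    counts = Counter(text.split())
    return [counts[w] for w in vocab]

vectors = [vectorize(u) for u in utterances]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(vecs, k=2, iters=10):
    # Naive seeding: first k points become initial centroids.
    centroids = [list(v) for v in vecs[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vecs:
            nearest = min(range(k), key=lambda i: dist(v, centroids[i]))
            clusters[nearest].append(v)
        for i, members in enumerate(clusters):
            if members:  # keep old centroid if a cluster empties out
                centroids[i] = [sum(col) / len(members) for col in zip(*members)]
    # Each utterance's cluster index acts as an unsupervised "dialogue act" label.
    return [min(range(k), key=lambda i: dist(v, centroids[i])) for v in vecs]

labels = kmeans(vectors, k=2)
```

In a real system, the discovered cluster IDs would stand in for hand-annotated dialogue act tags when training the dialogue manager; richer utterance representations and a principled choice of k would of course be needed in practice.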
A Tutorial Dialogue System for Real-Time Evaluation of Unsupervised Dialogue Act Classifiers: Exploring System Outcomes
Kristy Elizabeth Boyer