ABSTRACT
We present PLIANT, a learning system that supports adaptive assistance in an open calendaring system. PLIANT learns user preferences from the feedback that naturally occurs during interactive scheduling. It contributes a novel application of active learning in a domain where the choice of candidate schedules to present to the user must balance usefulness to the learning module with immediate benefit to the user. Our experimental results provide evidence of PLIANT's ability to learn user preferences under various conditions and reveal the tradeoffs made by the different active learning selection strategies.
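The abstract's central tension, picking candidate schedules that are simultaneously good for the user and informative for the learner, can be illustrated with a small selection heuristic. The sketch below is a minimal illustration of that tradeoff, not PLIANT's published algorithm: the names `predicted_utility`, `uncertainty`, and the mixing weight `alpha` are hypothetical, and the margin-style uncertainty proxy is an assumed stand-in for whatever informativeness measure the system actually uses.

```python
# Illustrative sketch (not PLIANT's actual method) of balancing
# immediate user benefit against learning value when selecting
# candidate schedules to present. All names here are hypothetical.

from dataclasses import dataclass
from typing import List


@dataclass
class Candidate:
    schedule_id: str
    features: List[float]  # e.g., encoded start time, day, conflict count


def predicted_utility(c: Candidate, weights: List[float]) -> float:
    """Current model's estimate of how much the user likes this schedule."""
    return sum(w * f for w, f in zip(weights, c.features))


def uncertainty(c: Candidate, weights: List[float]) -> float:
    """Informativeness proxy: candidates scoring near zero lie close to
    the model's decision boundary, so user feedback on them teaches the
    learner the most (a margin-style active-learning heuristic)."""
    return -abs(predicted_utility(c, weights))


def select_candidates(pool: List[Candidate], weights: List[float],
                      k: int = 3, alpha: float = 0.7) -> List[Candidate]:
    """Rank by a convex combination of user benefit and learning value.
    alpha=1.0 is pure exploitation (best for the user right now);
    alpha=0.0 is pure active learning (most informative query)."""
    def score(c: Candidate) -> float:
        return alpha * predicted_utility(c, weights) + (1 - alpha) * uncertainty(c, weights)
    return sorted(pool, key=score, reverse=True)[:k]


if __name__ == "__main__":
    pool = [Candidate(f"s{i}", [i / 10.0, (5 - i) / 10.0]) for i in range(10)]
    weights = [0.8, -0.3]  # toy preference model learned so far
    for c in select_candidates(pool, weights):
        print(c.schedule_id, round(predicted_utility(c, weights), 3))
```

Varying `alpha` in a sketch like this mirrors the tradeoff the experiments examine: aggressive exploration speeds up preference learning but can surface schedules the user finds less useful in the moment.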