DOI: 10.1145/1557019.1557060

Issues in evaluation of stream learning algorithms

Published: 28 June 2009

ABSTRACT

Learning from data streams is a research area of increasing importance, and many stream learning algorithms have been developed. Most of them learn decision models that continuously evolve over time, run in resource-aware environments, and detect and react to changes in the process that generates the data. One important issue, not yet adequately addressed, is the design of experimental work to evaluate and compare decision models that evolve over time. There are no golden standards for assessing performance in non-stationary environments. This paper proposes a general framework for assessing predictive stream learning algorithms. We defend the use of predictive sequential methods for error estimation: the prequential error. The prequential error allows us to monitor the evolution of the performance of models that evolve over time. Nevertheless, it is known to be a pessimistic estimator in comparison to holdout estimates. To obtain more reliable estimators we need some forgetting mechanism; two viable alternatives are sliding windows and fading factors. We observe that the prequential error converges to a holdout estimate when computed over a sliding window or with fading factors. We present illustrative examples of the use of prequential error estimators with fading factors for the tasks of: i) assessing the performance of a learning algorithm; ii) comparing learning algorithms; iii) hypothesis testing using the McNemar test; and iv) change detection using the Page-Hinkley test. In all these tasks, the prequential error estimated with fading factors provides reliable estimates. In comparison to sliding windows, fading factors are faster and memoryless, an important requirement for streaming applications. This paper is a contribution to the discussion of good practices in performance assessment when learning dynamic models that evolve over time.
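The abstract's two central devices reduce to short recurrences that are worth seeing concretely. The fading-factor prequential error keeps only two accumulators, S_t = L_t + alpha*S_{t-1} and N_t = 1 + alpha*N_{t-1}, and reports S_t / N_t; the Page-Hinkley test accumulates the deviations of a monitored statistic from its running mean (minus a tolerance delta) and signals a change when that sum rises more than lambda above its running minimum. Below is a minimal Python sketch of both, assuming a 0/1 loss stream; the class names and all parameter values (alpha = 0.999, delta = 0.005, lambda = 50) are illustrative assumptions, not the authors' code.

# A minimal sketch of the two devices named in the abstract: the prequential
# error computed with a fading factor, and the Page-Hinkley (PH) change
# detection test. Not the authors' implementation; class names and parameter
# values (alpha, delta, lam) are illustrative assumptions.

class FadingFactorPrequential:
    """Prequential error with fading factor alpha (close to 1):
    S_t = L_t + alpha*S_{t-1};  N_t = 1 + alpha*N_{t-1};  error = S_t / N_t.
    Memoryless: two accumulators, no window of past examples is stored."""

    def __init__(self, alpha=0.999):
        self.alpha = alpha
        self.s = 0.0  # faded sum of losses
        self.n = 0.0  # faded count of examples

    def update(self, loss):
        self.s = loss + self.alpha * self.s
        self.n = 1.0 + self.alpha * self.n
        return self.s / self.n  # current prequential error estimate


class PageHinkley:
    """PH test: alarms when the cumulative deviation of the monitored value
    from its running mean rises more than lam above its running minimum."""

    def __init__(self, delta=0.005, lam=50.0):
        self.delta = delta   # magnitude of change tolerated without alarm
        self.lam = lam       # detection threshold
        self.count = 0
        self.mean = 0.0      # running mean of the monitored value
        self.cum = 0.0       # cumulative deviation m_T
        self.cum_min = 0.0   # running minimum M_T

    def update(self, x):
        self.count += 1
        self.mean += (x - self.mean) / self.count
        self.cum += x - self.mean - self.delta
        self.cum_min = min(self.cum_min, self.cum)
        return self.cum - self.cum_min > self.lam  # True = change detected


if __name__ == "__main__":
    import random
    random.seed(1)
    preq = FadingFactorPrequential(alpha=0.999)
    ph = PageHinkley(delta=0.005, lam=50.0)
    for t in range(5000):
        p_err = 0.2 if t < 2500 else 0.5   # error rate jumps at t = 2500
        loss = 1.0 if random.random() < p_err else 0.0
        err = preq.update(loss)            # smoothed performance monitor
        if ph.update(loss):                # PH on the raw 0/1 loss stream
            print(f"change signalled at t={t}; prequential error={err:.3f}")
            break

Both monitors use constant memory and constant time per example, which is exactly the property the abstract credits to fading factors over sliding windows.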


Supplemental Material

p329-gama.mp4 (MP4 video, 127.4 MB)

References

  1. B. Babcock, M. Datar, R. Motwani, and L. O'Callaghan. Maintaining variance and k-medians over data stream windows. In Proc. of the 22nd Symposium on Principles of Database Systems, pages 234--243. ACM Press, 2003.
  2. Michele Basseville and Igor Nikiforov. Detection of Abrupt Changes: Theory and Applications. Prentice-Hall, 1987.
  3. C. Blake, E. Keogh, and C. J. Merz. UCI Repository of Machine Learning Databases, 1999.
  4. Gladys Castillo and João Gama. Bias management of Bayesian network classifiers. In A. Hoffmann, H. Motoda, and T. Scheffer, editors, Discovery Science, Proceedings of the 8th International Conference, pages 70--83. LNAI 3735, Springer Verlag, 2005.
  5. H. Chernoff. A measure of asymptotic efficiency for tests of a hypothesis based on the sums of observations. Annals of Mathematical Statistics, 23:493--507, 1952.
  6. Graham Cormode, S. Muthukrishnan, and Wei Zhuang. Conquering the divide: Continuous clustering of distributed data streams. In ICDE, pages 1036--1045, 2007.
  7. A. P. Dawid. Statistical theory: The prequential approach. Journal of the Royal Statistical Society A, 147:278--292, 1984.
  8. Janez Demsar. Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning Research, 7:1--30, 2006.
  9. T. Dietterich. Approximate statistical tests for comparing supervised classification learning algorithms. Technical Report 97.331, Oregon State University, Corvallis, 1996.
  10. Pedro Domingos and Geoff Hulten. Mining high-speed data streams. In Ismail Parsa, Raghu Ramakrishnan, and Sal Stolfo, editors, Proceedings of the Sixth ACM International Conference on Knowledge Discovery and Data Mining, pages 71--80. ACM Press, 2000.
  11. F. J. Ferrer-Troyano, J. S. Aguilar-Ruiz, and J. C. Riquelme. Incremental rule learning and border examples selection from numerical data streams. Journal of Universal Computer Science, 11(8):1426--1439, 2005.
  12. Francisco Ferrer-Troyano, Jesus S. Aguilar-Ruiz, and Jose C. Riquelme. Discovering decision rules from numerical data streams. In Proceedings of the 2004 ACM Symposium on Applied Computing, pages 649--653. ACM Press, 2004.
  13. João Gama, Pedro Medas, Gladys Castillo, and Pedro Rodrigues. Learning with drift detection. In Ana L. C. Bazzan and Sofiane Labidi, editors, Advances in Artificial Intelligence - SBIA 2004, volume 3171 of Lecture Notes in Computer Science, pages 286--295. Springer Verlag, October 2004.
  14. João Gama, Pedro Medas, and Ricardo Rocha. Forest trees for on-line data. In Proceedings of the 2004 ACM Symposium on Applied Computing, pages 632--636. ACM Press, 2004.
  15. João Gama, Ricardo Rocha, and Pedro Medas. Accurate decision trees for mining high-speed data streams. In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 523--528. ACM Press, 2003.
  16. B. Ghosh and P. Sen. Handbook of Sequential Analysis. Marcel Dekker, 1991.
  17. W. Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58(301):13--30, 1963.
  18. Geoff Hulten and Pedro Domingos. Catching up with the data: Research issues in mining data streams. In Proc. of the Workshop on Research Issues in Data Mining and Knowledge Discovery, 2001.
  19. Geoff Hulten and Pedro Domingos. VFML - a toolkit for mining high-speed time-changing data streams. http://www.cs.washington.edu/dm/vfml/, 2003.
  20. Geoff Hulten, Laurie Spencer, and Pedro Domingos. Mining time-changing data streams. In Proceedings of the 7th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 97--106. ACM Press, 2001.
  21. Richard Kirkby. Improving Hoeffding Trees. PhD thesis, University of Waikato, New Zealand, 2008.
  22. Ralf Klinkenberg. Learning drifting concepts: Example selection vs. example weighting. Intelligent Data Analysis, 8(3):281--300, 2004.
  23. I. Koychev. Gradual forgetting for adaptation to concept drift. In Proceedings of the ECAI 2000 Workshop on Current Issues in Spatio-Temporal Reasoning, Berlin, Germany, pages 101--106, 2000.
  24. I. Mierswa, M. Wurst, R. Klinkenberg, M. Scholz, and T. Euler. YALE: Rapid prototyping for complex data mining tasks. In ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining, pages 935--940. ACM Press, 2006.
  25. H. Mouss, D. Mouss, N. Mouss, and L. Sefouhi. Test of Page-Hinkley, an approach for fault detection in an agro-alimentary production system. In Proceedings of the 5th Asian Control Conference, volume 2, pages 815--818, 2004.
  26. E. S. Page. Continuous inspection schemes. Biometrika, 41(1/2):100--115, 1954.
  27. W. Nick Street and YongSeog Kim. A streaming ensemble algorithm (SEA) for large-scale classification. In Proc. of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 377--382. ACM Press, 2001.
  28. Gerhard Widmer and Miroslav Kubat. Learning in the presence of concept drift and hidden contexts. Machine Learning, 23:69--101, 1996.

Published in

KDD '09: Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
June 2009
1426 pages
ISBN: 9781605584959
DOI: 10.1145/1557019

      Copyright © 2009 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery, New York, NY, United States


      Qualifiers

      • research-article

      Acceptance Rates

Overall acceptance rate: 1,133 of 8,635 submissions, 13%
