01-12-2014 | Case study | Issue 1/2014 Open Access

Journal of Big Data 1/2014

Comparative study between incremental and ensemble learning on data streams: Case study

Journal:
Journal of Big Data > Issue 1/2014
Authors:
Wenyu Zang, Peng Zhang, Chuan Zhou, Li Guo
Important notes

Electronic supplementary material

The online version of this article (doi:10.1186/2196-1115-1-5) contains supplementary material, which is available to authorized users.

Authors’ contributions

WZ and PZ have made substantial contributions to conception and design. WZ has been involved in drafting the manuscript; PZ and CZ have revised it critically for important intellectual content; LG has given final approval of the version to be published. All authors read and approved the final manuscript.

Abstract

With the unbounded growth of real-world data and the increasing demand for real-time processing, the immediate processing of big stream data has become an urgent problem. In stream data, hidden patterns commonly evolve over time (i.e., concept drift), and many dynamic learning strategies have been proposed to cope with this, such as incremental learning and ensemble learning. To the best of our knowledge, no prior work has systematically compared these two methods. In this paper we conduct a comparative study between these two learning methods. We first introduce the concept of “concept drift” and propose how to measure it quantitatively. Then, we recall the history of incremental learning and ensemble learning, introducing milestones in their development. In experiments, we comprehensively compare and analyze their performance with respect to accuracy and time efficiency under various concept drift scenarios. We conclude with several possible future research problems.
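To make the comparison in the abstract concrete, the sketch below contrasts the two strategies on a synthetic drifting stream: a single model updated one example at a time (incremental learning) versus a majority-voting ensemble whose members are trained on successive data chunks (ensemble learning). This is only an illustration of the general setting, not the algorithms or benchmarks evaluated in the paper; the two-phase synthetic stream, the perceptron base learner, the chunk size of 100, and the prequential (test-then-train) evaluation are all assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_stream(n_per_phase=500):
    """Simulate a binary-classification stream with one abrupt concept drift:
    the decision rule flips halfway through the stream."""
    X = rng.normal(size=(2 * n_per_phase, 2))
    y_a = (X[:n_per_phase, 0] > 0).astype(int)    # concept A
    y_b = (X[n_per_phase:, 0] <= 0).astype(int)   # concept B (flipped boundary)
    return X, np.concatenate([y_a, y_b])

class IncrementalPerceptron:
    """Incremental learning: one model, updated after every example."""
    def __init__(self, lr=0.1):
        self.w = np.zeros(3)   # two weights plus a bias term
        self.lr = lr
    def predict(self, x):
        return int(np.dot(self.w, np.append(x, 1.0)) > 0)
    def update(self, x, y):
        if self.predict(x) != y:                  # classic mistake-driven update
            self.w += self.lr * (2 * y - 1) * np.append(x, 1.0)

class ChunkEnsemble:
    """Ensemble learning: one member per recent chunk, majority vote."""
    def __init__(self, chunk_size=100, max_members=5):
        self.chunk_size, self.max_members = chunk_size, max_members
        self.members, self.buffer = [], []
    def predict(self, x):
        if not self.members:
            return 0
        votes = [m.predict(x) for m in self.members]
        return int(sum(votes) >= len(votes) / 2)
    def update(self, x, y):
        self.buffer.append((x, y))
        if len(self.buffer) == self.chunk_size:
            member = IncrementalPerceptron()
            for _ in range(5):                    # a few passes over the chunk
                for xi, yi in self.buffer:
                    member.update(xi, yi)
            self.members.append(member)
            self.members = self.members[-self.max_members:]  # drop oldest members
            self.buffer = []

X, y = make_stream()
for name, model in [("incremental", IncrementalPerceptron()),
                    ("ensemble", ChunkEnsemble())]:
    correct = 0
    for xi, yi in zip(X, y):                      # prequential: test, then train
        correct += (model.predict(xi) == yi)
        model.update(xi, yi)
    print(f"{name}: prequential accuracy = {correct / len(y):.3f}")
```

Under this toy setup, both learners must recover after the boundary flip: the incremental model adapts through continued per-example updates, while the ensemble adapts by retiring members trained on the old concept as new chunks arrive. This mirrors, in miniature, the accuracy-under-drift comparison the abstract describes.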
