Years and Authors of Summarized Original Work
2006; Balcan, Beygelzimer, Langford
2007; Balcan, Broder, Zhang
2007; Hanneke
2013; Urner, Wulff, Ben-David
2014; Awasthi, Balcan, Long
Problem Definition
Most classic machine learning methods depend on the assumption that humans can annotate all the data available for training. However, many modern machine learning applications (including image and video classification, protein sequence classification, and speech processing) have massive amounts of unannotated or unlabeled data. As a consequence, there has been tremendous interest both in machine learning and its application areas in designing algorithms that most efficiently utilize the available data while minimizing the need for human intervention. An extensively used and studied technique is active learning, where the algorithm is presented with a large pool of unlabeled examples (such as all images available on the web) and can interactively ask for the labels of examples of its own...
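A classic illustration of this query model (a minimal sketch, not an algorithm from the summarized works) is learning a one-dimensional threshold classifier: a passive learner needs a label for essentially every pool point, while an active learner can binary-search the sorted pool and recover the threshold with only logarithmically many label queries. The function and variable names below are illustrative.

```python
def active_learn_threshold(pool, oracle):
    """Learn a 1-D threshold classifier from a pool of unlabeled points.

    The active learner binary-searches the sorted pool, querying the
    label oracle only O(log n) times, whereas a passive learner would
    consume O(n) labels to locate the threshold to the same precision.
    """
    pool = sorted(pool)
    lo, hi = 0, len(pool)  # invariant: first positive example lies in [lo, hi]
    queries = 0
    while lo < hi:
        mid = (lo + hi) // 2
        queries += 1
        if oracle(pool[mid]):   # label +1: boundary is at or left of mid
            hi = mid
        else:                   # label -1: boundary is right of mid
            lo = mid + 1
    threshold = pool[lo] if lo < len(pool) else float("inf")
    return threshold, queries
```

With a pool of 1,000 points, the learner below pinpoints the decision boundary with 10 label queries rather than 1,000, mirroring the exponential label savings that the theoretical analyses quantify for this simple hypothesis class.

```python
pool = [i / 1000 for i in range(1000)]
t, q = active_learn_threshold(pool, lambda x: x >= 0.731)
# t == 0.731, q == 10
```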
Recommended Reading
Awasthi P, Balcan M-F, Long PM (2014) The power of localization for efficiently learning linear separators with noise. In: Proceedings of the 46th annual symposium on the theory of computing (STOC), New York
Balcan M-F, Beygelzimer A, Langford J (2006) Agnostic active learning. In: Proceedings of the 23rd international conference on machine learning (ICML), Pittsburgh
Balcan M-F, Broder A, Zhang T (2007) Margin based active learning. In: Proceedings of the 20th annual conference on computational learning theory (COLT), San Diego
Balcan M-F, Long PM (2013) Active and passive learning of linear separators under log-concave distributions. In: Proceedings of the 26th conference on learning theory (COLT), Princeton
Beygelzimer A, Hsu D, Langford J, Zhang T (2010) Agnostic active learning without constraints. In: Advances in neural information processing systems (NIPS), Vancouver
Cohn D, Atlas L, Ladner R (1994) Improving generalization with active learning. In: Proceedings of the 11th international conference on machine learning (ICML), New Brunswick
Dasgupta S, Hsu D (2008) Hierarchical sampling for active learning. In: Proceedings of the 25th international conference on machine learning (ICML), Helsinki
Dasgupta S, Hsu DJ, Monteleoni C (2007) A general agnostic active learning algorithm. In: Advances in neural information processing systems (NIPS), Vancouver
Hanneke S (2007) A bound on the label complexity of agnostic active learning. In: Proceedings of the 24th international conference on machine learning (ICML), Corvallis
Kearns MJ, Vazirani UV (1994) An introduction to computational learning theory. MIT, Cambridge
Koltchinskii V (2010) Rademacher complexities and bounding the excess risk in active learning. J Mach Learn Res 11:2457–2485
Urner R, Wulff S, Ben-David S (2013) PLAL: cluster-based active learning. In: Proceedings of the 26th conference on learning theory (COLT), Princeton
Vapnik VN (1998) Statistical learning theory. Wiley, New York
Zhang C, Chaudhuri K (2014) Beyond disagreement-based agnostic active learning. In: Advances in neural information processing systems (NIPS), Montreal
Copyright information
© 2016 Springer Science+Business Media New York
Cite this entry
Balcan, MF., Urner, R. (2016). Active Learning – Modern Learning Theory. In: Kao, MY. (eds) Encyclopedia of Algorithms. Springer, New York, NY. https://doi.org/10.1007/978-1-4939-2864-4_769
Publisher Name: Springer, New York, NY
Print ISBN: 978-1-4939-2863-7
Online ISBN: 978-1-4939-2864-4
eBook Packages: Computer Science, Reference Module Computer Science and Engineering