Abstract
Unlabeled data tells us how the instances from all the classes, mixed together, are distributed. If we know how the instances from each class are distributed, we may decompose the mixture into individual classes. This is the idea behind mixture models. In this chapter, we formalize the idea of mixture models for semi-supervised learning. First we review some concepts in probabilistic modeling. Readers familiar with machine learning can skip to Section 3.2.
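The decomposition idea above can be sketched with a tiny EM loop. This is a minimal illustration, not the chapter's own algorithm: it assumes a two-component 1-D Gaussian mixture (the component count, Gaussian form, and all variable names are assumptions for this sketch) and shows how EM recovers the per-class distributions from purely unlabeled data.

```python
import numpy as np

# Minimal sketch: fit a two-component 1-D Gaussian mixture with EM.
# The sample is unlabeled; EM decomposes the mixture into its classes.
# (Illustrative assumptions: 2 components, Gaussian densities.)

rng = np.random.default_rng(0)
# Unlabeled sample drawn from a mixture of two Gaussians (true means -2 and +3).
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 300)])

# Initialize the mixture parameters: weights, means, variances.
w = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])

for _ in range(100):
    # E-step: posterior responsibility of each component for each point.
    dens = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and variances from responsibilities.
    n_k = r.sum(axis=0)
    w = n_k / len(x)
    mu = (r * x[:, None]).sum(axis=0) / n_k
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k

print(np.sort(mu))  # estimated component means, close to the true -2 and +3
```

Note that no labels were used anywhere in the loop: the responsibilities `r` play the role of soft class assignments, which is exactly the decomposition the abstract describes.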
Copyright information
© 2009 Springer Nature Switzerland AG
Zhu, X., Goldberg, A.B. (2009). Mixture Models and EM. In: Introduction to Semi-Supervised Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning. Springer, Cham. https://doi.org/10.1007/978-3-031-01548-9_3
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-00420-9
Online ISBN: 978-3-031-01548-9