- 1. M. Craven, D. Freitag, A. McCallum, T. Mitchell, K. Nigam, and C. Y. Quek. Learning to extract symbolic knowledge from the world wide web. Technical report, Carnegie Mellon University, January 1997.
- 2. S. E. Decatur. PAC learning with constant-partition classification noise and applications to decision tree induction. In Proceedings of the Fourteenth International Conference on Machine Learning, pages 83-91, July 1997.
- 3. A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B, 39:1-38, 1977.
- 4. Richard O. Duda and Peter E. Hart. Pattern Classification and Scene Analysis. Wiley, 1973.
- 5. Z. Ghahramani and M. I. Jordan. Supervised learning from incomplete data via an EM approach. In Advances in Neural Information Processing Systems (NIPS 6). Morgan Kaufmann, 1994.
- 6. S. A. Goldman and M. J. Kearns. On the complexity of teaching. Journal of Computer and System Sciences, 50(1):20-31, February 1995.
- 7. A. G. Hauptmann and M. J. Witbrock. Informedia: News-on-demand - multimedia information acquisition and retrieval. In M. Maybury, editor, Intelligent Multimedia Information Retrieval, 1997.
- 8. J. Jackson and A. Tomkins. A computational model of teaching. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, pages 319-326. Morgan Kaufmann, 1992.
- 9. D. R. Karger. Random sampling in cut, flow, and network design problems. In Proceedings of the Twenty-Sixth Annual ACM Symposium on the Theory of Computing, pages 648-657, May 1994.
- 10. D. R. Karger. Random sampling in cut, flow, and network design problems. Journal version draft, 1997.
- 11. M. Kearns. Efficient noise-tolerant learning from statistical queries. In Proceedings of the Twenty-Fifth Annual ACM Symposium on Theory of Computing, pages 392-401, 1993.
- 12. D. D. Lewis and M. Ringuette. A comparison of two learning algorithms for text categorization. In Third Annual Symposium on Document Analysis and Information Retrieval, pages 81-93, 1994.
- 13. Joel Ratsaby and Santosh S. Venkatesh. Learning from a mixture of labeled and unlabeled examples with parametric side information. In Proceedings of the 8th Annual Conference on Computational Learning Theory, pages 412-417. ACM Press, New York, NY, 1995.
- 14. M. J. Witbrock and A. G. Hauptmann. Improving acoustic models by watching television. Technical Report CMU-CS-98-110, Carnegie Mellon University, March 1998.
- 15. D. Yarowsky. Unsupervised word sense disambiguation rivaling supervised methods. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, pages 189-196, 1995.