ABSTRACT
Graph-based methods for semi-supervised learning have recently been shown to be promising for combining labeled and unlabeled data in classification problems. However, inference for graph-based methods often does not scale well to very large data sets, since it requires inversion of a large matrix or solution of a large linear program. Moreover, such approaches are inherently transductive, giving predictions for only those points in the unlabeled set, and not for an arbitrary test point. In this paper a new approach is presented that preserves the strengths of graph-based semi-supervised learning while overcoming the limitations of scalability and non-inductive inference, through a combination of generative mixture models and discriminative regularization using the graph Laplacian. Experimental results show that this approach preserves the accuracy of purely graph-based transductive methods when the data has "manifold structure," and at the same time achieves inductive learning with significantly reduced computational cost.
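The purely graph-based transductive baseline that the abstract contrasts against can be illustrated with a minimal sketch of the harmonic-function solution of Zhu et al. (2003): build an affinity graph over labeled and unlabeled points, form the graph Laplacian, and solve a linear system for the unlabeled labels. This is a toy illustration of that baseline (not the paper's harmonic-mixture method); the data, kernel bandwidth, and variable names are illustrative.

```python
import numpy as np

# Toy 1-D data: two clusters; the first point of each cluster is labeled.
X = np.array([0.0, 0.1, 0.2, 1.0, 1.1, 1.2])
y = np.array([0.0, 1.0])            # labels for points 0 and 3
labeled = np.array([0, 3])
unlabeled = np.array([1, 2, 4, 5])

# Gaussian-kernel affinity matrix W (zero diagonal).
sigma = 0.3
W = np.exp(-(X[:, None] - X[None, :]) ** 2 / (2 * sigma**2))
np.fill_diagonal(W, 0.0)

# Combinatorial graph Laplacian L = D - W.
L = np.diag(W.sum(axis=1)) - W

# Harmonic solution on the unlabeled block: f_u = -L_uu^{-1} L_ul f_l.
# Note this requires solving a dense linear system in the number of
# unlabeled points -- exactly the scalability cost the paper addresses.
L_uu = L[np.ix_(unlabeled, unlabeled)]
L_ul = L[np.ix_(unlabeled, labeled)]
f_u = np.linalg.solve(L_uu, -L_ul @ y)

print(np.round(f_u, 3))  # near 0 for the first cluster, near 1 for the second
```

By the maximum principle, the harmonic values `f_u` lie between the labeled values 0 and 1, and thresholding at 0.5 recovers the two clusters; the method is transductive because it produces values only for the unlabeled points given at training time, with no function to apply to a new test point.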