ABSTRACT
The effectiveness of knowledge transfer using classification algorithms depends on the difference between the distribution that generates the training examples and the one from which test examples are drawn. The task is especially difficult when the training examples come from one or several domains that differ from the test domain. In this paper, we propose a locally weighted ensemble framework that combines multiple models for transfer learning, where the weights are dynamically assigned according to each model's predictive power on each test example. The framework can integrate the advantages of various learning algorithms and the labeled information from multiple training domains into one unified classification model, which can then be applied to a different domain. Importantly, unlike many previously proposed methods, none of the base learners needs to be specifically designed for transfer learning. We show the optimality of a locally weighted ensemble framework as a general approach to combining multiple models for domain transfer. We then propose an implementation of the local weight assignments that maps the structures of a model onto the structures of the test domain and weights each model locally according to its consistency with the neighborhood structure around each test example. Experimental results on text classification, spam filtering, and intrusion detection data sets demonstrate significant improvements in classification accuracy gained by the framework. On a transfer learning task of newsgroup message categorization, the proposed locally weighted ensemble framework achieves 97% accuracy, whereas the best single model correctly predicts only 73% of the test examples. In summary, the improvement in accuracy is over 10% and up to 30% across different problems.
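As a rough illustration of the per-example weighting scheme described above (not the authors' exact graph-based implementation), the sketch below scores each base model on each test example by how well the model's predicted labels agree with the clustering structure of the test domain inside that example's neighborhood, then combines the models' posteriors with the resulting normalized weights. The helper names, the use of k-means and k-nearest neighbors as proxies for the local structure mapping, and the scikit-learn-style `predict`/`predict_proba` interface are all assumptions for illustration.

```python
# Minimal sketch of a locally weighted ensemble (LWE) for transfer learning.
# Assumes pre-trained base models exposing scikit-learn-style predict /
# predict_proba, and that all models share the same class ordering.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def local_weights(models, X_test, n_clusters=2, n_neighbors=10):
    """For each test example, weight each model by how consistently its
    predicted label boundaries match the clustering structure of the test
    domain within the example's neighborhood."""
    clusters = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X_test)
    nbrs = NearestNeighbors(n_neighbors=n_neighbors).fit(X_test)
    _, idx = nbrs.kneighbors(X_test)              # neighbor indices per example
    preds = [m.predict(X_test) for m in models]   # each model's hard labels
    W = np.zeros((len(X_test), len(models)))
    for k, p in enumerate(preds):
        for i, nb in enumerate(idx):
            # Fraction of neighbors on which "same predicted label as i"
            # coincides with "same cluster as i".
            same_pred = (p[nb] == p[i])
            same_clus = (clusters[nb] == clusters[i])
            W[i, k] = np.mean(same_pred == same_clus)
    W /= W.sum(axis=1, keepdims=True) + 1e-12     # normalize per example
    return W

def lwe_predict_proba(models, X_test, **kw):
    """Combine model posteriors with per-example local weights."""
    W = local_weights(models, X_test, **kw)                      # (n, k)
    probs = np.stack([m.predict_proba(X_test) for m in models])  # (k, n, c)
    return np.einsum('nk,knc->nc', W, probs)                     # (n, c)
```

A model whose decision boundaries happen to align with the intrinsic cluster structure of the test domain near a given example receives a high weight there, while the same model may be down-weighted in other regions, which is the key difference from globally weighted ensembles.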
Index Terms
- Knowledge transfer via multiple model local structure mapping