DOI: 10.1145/1401890.1401928

Knowledge transfer via multiple model local structure mapping

Published: 24 August 2008

ABSTRACT

The effectiveness of knowledge transfer using classification algorithms depends on the difference between the distribution that generates the training examples and the one from which test examples are to be drawn. The task can be especially difficult when the training examples come from one or several domains different from the test domain. In this paper, we propose a locally weighted ensemble framework to combine multiple models for transfer learning, where the weights are dynamically assigned according to a model's predictive power on each test example. It can integrate the advantages of various learning algorithms and the labeled information from multiple training domains into one unified classification model, which can then be applied to a different domain. Importantly, unlike many previously proposed methods, none of the base learning methods is required to be specifically designed for transfer learning. We show the optimality of the locally weighted ensemble framework as a general approach to combining multiple models for domain transfer. We then propose an implementation of the local weight assignments by mapping the structures of a model onto the structures of the test domain, and then weighting each model locally according to its consistency with the neighborhood structure around the test example. Experimental results on text classification, spam filtering, and intrusion detection data sets demonstrate significant improvements in classification accuracy gained by the framework. On a transfer learning task of newsgroup message categorization, the proposed locally weighted ensemble framework achieves 97% accuracy, whereas the best single model predicts correctly on only 73% of the test examples. Across the different problems, the improvement in accuracy is over 10% and up to 30%.
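The abstract describes the mechanism only at a high level. The sketch below (Python, scikit-learn) is a minimal, hypothetical illustration of a locally weighted ensemble in this spirit: for each test example, a model's weight reflects how consistently its predicted labels track the cluster and neighborhood structure of the unlabeled test set, and the final prediction is the weight-averaged posterior. The function name, the k-means/k-NN choices, and the consistency score are illustrative assumptions, not the authors' exact graph-based structure mapping; each pre-trained model is assumed to expose predict and predict_proba with a shared class ordering.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def locally_weighted_ensemble(models, X_test, n_clusters=10, n_neighbors=10):
    """Combine pre-trained classifiers with per-example weights derived from
    how well each model's predictions agree with the local cluster structure
    of the unlabeled test set (an illustrative sketch, not the paper's exact
    formulation)."""
    # Capture the test domain's local structure with clustering + neighborhoods.
    clusters = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X_test)
    nn = NearestNeighbors(n_neighbors=n_neighbors).fit(X_test)
    _, neigh = nn.kneighbors(X_test)

    # Predictions of every model on every test example.
    preds = np.array([m.predict(X_test) for m in models])         # (K, n)
    probas = np.array([m.predict_proba(X_test) for m in models])  # (K, n, C)

    n, K = X_test.shape[0], len(models)
    weights = np.zeros((K, n))
    for i in range(n):
        same_cluster = clusters[neigh[i]] == clusters[i]
        for k in range(K):
            # Consistency: among x_i's neighbors, does model k's label
            # agreement track cluster-membership agreement?
            same_label = preds[k, neigh[i]] == preds[k, i]
            weights[k, i] = np.mean(same_label == same_cluster)

    # Normalize weights per test example; fall back to uniform if all zero.
    col_sum = weights.sum(axis=0, keepdims=True)
    weights = np.where(col_sum > 0, weights / np.maximum(col_sum, 1e-12), 1.0 / K)

    # Weighted average of the models' posterior estimates.
    combined = np.einsum('kn,knc->nc', weights, probas)
    return combined.argmax(axis=1)
```

Given models trained on one or more source domains, y_hat = locally_weighted_ensemble(models, X_test) returns per-example predictions in which each base model's influence varies across the test domain rather than being fixed globally, which is the key difference from standard (globally weighted) ensemble averaging.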


Published in

KDD '08: Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining
August 2008, 1116 pages
ISBN: 9781605581934
DOI: 10.1145/1401890
General Chair: Ying Li
Program Chairs: Bing Liu, Sunita Sarawagi

Copyright © 2008 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Qualifiers

      • research-article

Acceptance Rates

KDD '08 Paper Acceptance Rate: 118 of 593 submissions, 20%
Overall Acceptance Rate: 1,133 of 8,635 submissions, 13%
