Generalized Feature Embedding for Supervised, Unsupervised, and Online Learning Tasks
Feature embedding is an emerging research area that aims to transform features from their original space into a new space to support effective learning. Many feature embedding algorithms exist, but they often suffer from several major drawbacks: (1) they handle only a single feature type, or require users to clearly separate features into different feature views and supply this information for embedding learning; (2) they are designed for either supervised or unsupervised learning tasks, but not both; and (3) embeddings for new, out-of-training samples must be obtained through a retraining phase, making them unsuitable for online learning. In this paper, we propose a generalized feature embedding algorithm, GEL, for supervised, unsupervised, and online learning tasks. GEL learns feature embeddings from any type of data, including data with mixed feature types. For supervised learning tasks with class label information, GEL leverages a Class Partitioned Instance Representation (CPIR) process to arrange instances, based on their labels, into a dense binary representation via row and feature vectors for feature embedding learning. If class labels are unavailable, CPIR naturally degenerates to treating all instances as one class. Based on the CPIR representation, GEL uses eigenvector decomposition to convert the proximity matrix into a low-dimensional space. For new, out-of-training samples, low-dimensional representations are derived through a direct conversion without a retraining phase. The learned numerical embedding features can be used directly to represent instances for effective learning. Experiments and comparisons on 28 datasets with categorical, numerical, and ordinal features demonstrate that the embedding features learned by GEL effectively represent the original instances for clustering, classification, and online learning.
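As a rough illustration of the workflow the abstract describes, the sketch below assumes a plain one-hot encoding as a stand-in for CPIR, a dot-product proximity matrix, and a Nystrom-style direct projection for out-of-training samples; the function names (binary_encode, fit_embedding, embed_new) and these modeling choices are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def binary_encode(X_cat, value_maps=None):
    # One-hot encode each categorical column into a dense binary matrix
    # (a simplified stand-in for the CPIR representation; assumes new
    # samples only contain category values seen during training).
    if value_maps is None:
        value_maps = [np.unique(X_cat[:, j]) for j in range(X_cat.shape[1])]
    cols = [np.eye(len(vals))[np.searchsorted(vals, X_cat[:, j])]
            for j, vals in enumerate(value_maps)]
    return np.hstack(cols), value_maps

def fit_embedding(B, k=2):
    # Eigendecompose the instance proximity matrix B B^T and keep the
    # top-k eigenpairs to obtain a k-dimensional embedding of the training set.
    P = B @ B.T                                   # proximity (similarity) matrix
    eigvals, eigvecs = np.linalg.eigh(P)          # eigenvalues in ascending order
    idx = np.argsort(eigvals)[::-1][:k]
    lam, U = eigvals[idx], eigvecs[:, idx]
    Z = U * np.sqrt(np.maximum(lam, 0))           # training embeddings, one row per instance
    return Z, lam, U

def embed_new(B_train, U, lam, b_new):
    # Project an out-of-training sample directly (Nystrom-style), without retraining.
    p = B_train @ b_new                           # proximity to the training instances
    return (U.T @ p) / np.sqrt(np.maximum(lam, 1e-12))

# Toy usage on categorical data.
X = np.array([["a", "x"], ["a", "y"], ["b", "y"], ["b", "x"]])
B, maps = binary_encode(X)
Z, lam, U = fit_embedding(B, k=2)
b_new, _ = binary_encode(np.array([["a", "y"]]), maps)
z_new = embed_new(B, U, lam, b_new[0])

Under these assumptions, feeding a training instance back through embed_new reproduces its row of Z, which is what keeps the direct, retraining-free conversion consistent with the embedding learned on the training set.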