
2016 | Original paper | Book chapter

Online Weighted Multi-task Feature Selection

Authors: Wei Xue, Wensheng Zhang

Published in: Neural Information Processing

Publisher: Springer International Publishing

Abstract

The goal of multi-task feature selection is to learn explanatory features across multiple related tasks. In this paper, we develop a weighted feature selection model that enhances the sparsity of the learning variables and propose an online algorithm to solve this model. The worst-case time complexity and memory cost of this algorithm at each iteration are both \(\mathcal {O}(N\times Q)\), where N is the number of feature dimensions and Q is the number of tasks. At each iteration, the learning variables are solved analytically from a memory of the previous (sub)gradients and the whole weighted regularization, and the weight coefficients used for the next iteration are updated from the current solution. A theoretical analysis of the regret bound of the proposed algorithm is presented, and experiments on public data demonstrate that it can yield better performance, e.g., faster convergence and sparser solutions.
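To make the per-iteration update concrete, the following Python sketch illustrates one dual-averaging-style step with a row-wise weighted \(l_{2,1}\) penalty: the averaged (sub)gradients and the per-feature weights yield a closed-form soft-thresholding solution, and the weights for the next round are recomputed from that solution. The function names, the choice of regularizer, and the constants lam, gamma, and eps are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def owmtfs_step(avg_grad, weights, t, lam=0.1, gamma=1.0):
    # avg_grad: (N, Q) running average of the (sub)gradients over t rounds
    # weights : (N,)   per-feature weights of the weighted regularizer
    # t       : current round index (t >= 1)
    N, Q = avg_grad.shape
    W = np.zeros((N, Q))
    row_norms = np.linalg.norm(avg_grad, axis=1)   # ||bar g_i||_2 for each feature i
    scale = np.sqrt(t) / gamma                     # dual-averaging step scaling
    for i in range(N):
        thresh = lam * weights[i]
        if row_norms[i] > thresh:
            # closed-form row-wise soft thresholding of the averaged gradient
            W[i] = -scale * (1.0 - thresh / row_norms[i]) * avg_grad[i]
        # otherwise the whole row stays exactly zero: feature i is not selected
    return W

def update_weights(W, eps=1e-3):
    # Reweighting in the spirit of reweighted l1 minimization [4, 6]:
    # features with small row norms receive larger weights in the next round.
    return 1.0 / (np.linalg.norm(W, axis=1) + eps)

In an online loop one would maintain the running gradient average, call owmtfs_step to obtain the new solution, and call update_weights before the next round.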


Footnotes
1
Our motivation comes from the recently developed reweighted \(l_1\) minimization model in compressive sensing [4, 6]. Due to space limitations, we do not elaborate here.
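For context, the reweighting rule of [4] updates each weight from the current estimate as \(w_i^{(k+1)} = 1/(|x_i^{(k)}| + \epsilon)\) with a small \(\epsilon > 0\), so that entries already close to zero receive large weights and are penalized more strongly in the next iteration (the notation here follows [4], not necessarily this paper).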
 
2
Regret is the difference between the accumulated objective function value up to the T-th step and the smallest objective function value achievable in hindsight by a single fixed solution.
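In generic online-learning notation (with \(f_t\) the loss at step t, \(\Psi\) the regularizer, and \(W_t\) the iterate; the symbols are assumptions, not necessarily the paper's), this reads
\[ R(T) = \sum_{t=1}^{T}\bigl(f_t(W_t) + \Psi(W_t)\bigr) - \min_{W}\sum_{t=1}^{T}\bigl(f_t(W) + \Psi(W)\bigr). \]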
 
References
1. Altmann, A., Ng, B.: Joint feature extraction from functional connectivity graphs with multi-task feature learning. In: Proceedings of the International Workshop on Pattern Recognition in NeuroImaging, pp. 29–32 (2015)
2. Argyriou, A., Evgeniou, T., Pontil, M.: Multi-task feature learning. In: Advances in Neural Information Processing Systems, pp. 41–48 (2006)
3. Argyriou, A., Evgeniou, T., Pontil, M.: Convex multi-task feature learning. Mach. Learn. 73(3), 243–272 (2008)
4. Candès, E.J., Wakin, M.B., Boyd, S.P.: Enhancing sparsity by reweighted \(l_1\) minimization. J. Fourier Anal. Appl. 14(5), 877–905 (2008)
6. Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008)
7. Dekel, O., Long, P.M., Singer, Y.: Online multitask learning. In: Lugosi, G., Simon, H.U. (eds.) COLT 2006. LNCS (LNAI), vol. 4005, pp. 453–467. Springer, Heidelberg (2006)
8. Evgeniou, T., Micchelli, C.A., Pontil, M.: Learning multiple tasks with kernel methods. J. Mach. Learn. Res. 6, 615–637 (2005)
9. Liu, A.-A., Su, Y.-T., Nie, W.-Z., Kankanhalli, M.: Hierarchical clustering multi-task learning for joint human action grouping and recognition. IEEE Trans. Pattern Anal. Mach. Intell. doi:10.1109/TPAMI.2016.2537337
10. Liu, J., Ji, S., Ye, J.: Multi-task feature learning via efficient \(l_{2,1}\) norm minimization. In: Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence, pp. 339–348 (2009)
11. Luo, Y., Wen, Y., Tao, D., Gui, J., Xu, C.: Large margin multi-modal multi-task feature extraction for image classification. IEEE Trans. Image Process. 25(1), 414–427 (2016)
12. Nesterov, Y.: A method of solving a convex programming problem with convergence rate \(O(1/k^2)\). Sov. Math. Dokl. 27(2), 372–376 (1983)
14. Obozinski, G., Taskar, B., Jordan, M.I.: Joint covariate selection and joint subspace selection for multiple classification problems. Statist. Comput. 20(2), 231–252 (2010)
15. Quattoni, A., Carreras, X., Collins, M., Darrell, T.: An efficient projection for \(l_{1,\infty }\) regularization. In: Proceedings of the 26th International Conference on Machine Learning, pp. 857–864 (2009)
16. Xiao, L.: Dual averaging methods for regularized stochastic learning and online optimization. J. Mach. Learn. Res. 11, 2543–2596 (2010)
17. Yang, H., Lyu, M.R., King, I.: Efficient online learning for multitask feature selection. ACM Trans. Knowl. Discov. Data 7(2), 1–27 (2013)
18. Yuan, M., Lin, Y.: Model selection and estimation in regression with grouped variables. J. Roy. Statist. Soc. B 68(1), 49–67 (2006)
19. Zhang, Y., Yeung, D.-Y., Xu, Q.: Probabilistic multi-task feature selection. In: Advances in Neural Information Processing Systems, pp. 2559–2567 (2010)
20. Zhou, Q., Zhao, Q.: Flexible clustered multi-task learning by learning representative tasks. IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 266–278 (2016)
Metadata
Title
Online Weighted Multi-task Feature Selection
Authors
Wei Xue
Wensheng Zhang
Copyright year
2016
DOI
https://doi.org/10.1007/978-3-319-46672-9_23