
16.03.2022 | Original Article

Multitask transfer learning with kernel representation

Authors: Yulu Zhang, Shihui Ying, Zhijie Wen

Published in: Neural Computing and Applications | Issue 15/2022


Abstract

In many real-world applications, collecting and labeling data is expensive and time-consuming, so there is a need to obtain a high-performance learner by leveraging data or knowledge from other domains. Transfer learning is a promising way to address this problem. In this paper, we propose a multitask transfer learning method that aims to improve the performance of the target learner by transferring knowledge from related source tasks. First, we formulate the target learner as a nonlinear function, which is approximated by a linear combination of eigenfunctions. Further, to transfer knowledge from the source tasks, we constrain the target model to be a linear combination of the source models, following previous work. However, knowledge from some source tasks may not be useful for adaptation, so we add a sparse constraint to the objective function to select the related source tasks. Different from previous transfer learning methods, our method transfers knowledge by jointly learning the source tasks and the target task. In addition, the sparse constraint allows it to select the source tasks associated with the target task. Empirically, the method exhibits protection against negative transfer. Finally, we compare the proposed method with three single-task learning methods and six state-of-the-art multitask learning methods on two data sets. Compared with the second-best results, the nMSE of our method achieves a relative improvement of \(10.85\%\) with a training size of 100 on the SARCOS data set and a relative improvement of \(4.26\%\) with a training ratio of \(20\%\) on the Isolet data set. The experimental results show that the proposed method can effectively improve the performance of the target task by transferring knowledge from related source tasks.
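To make the formulation concrete, the following is a minimal toy sketch of the core idea only: each source task is given a kernel model (standing in for the eigenfunction representation), and the target learner is expressed as a sparse linear combination of the source models, with an L1 penalty selecting the related sources. This is not the authors' implementation: the paper learns the source and target tasks jointly, whereas this sketch fits the sources first for brevity, and the data generator, function names, and hyperparameters (make_task, alpha, gamma) are hypothetical stand-ins.

```python
# Toy sketch (not the paper's implementation) of transfer via a sparse
# combination of source models. All names and settings are hypothetical.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

def make_task(shift, n):
    """Hypothetical 1-D regression task; 'shift' controls task relatedness."""
    X = rng.uniform(-3.0, 3.0, size=(n, 1))
    y = np.sin(X[:, 0] + shift) + 0.1 * rng.standard_normal(n)
    return X, y

# Several source tasks (the last only weakly related) and a small target set.
sources = [make_task(s, 200) for s in (0.0, 0.2, 2.5)]
X_tgt, y_tgt = make_task(0.1, 30)

# One kernel model per source task, standing in for the kernel/eigenfunction
# representation described in the abstract.
source_models = [KernelRidge(kernel="rbf", alpha=0.1, gamma=0.5).fit(X, y)
                 for X, y in sources]

# Express the target learner through the source models' predictions and learn
# sparse combination weights; the L1 penalty plays the role of the sparse
# constraint that selects the related source tasks.
F = np.column_stack([m.predict(X_tgt) for m in source_models])
combiner = Lasso(alpha=0.01).fit(F, y_tgt)
print("source weights:", combiner.coef_)  # weakly related source tends toward 0

def predict_target(X_new):
    F_new = np.column_stack([m.predict(X_new) for m in source_models])
    return combiner.predict(F_new)
```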


Metadata
Title
Multitask transfer learning with kernel representation
Authors
Yulu Zhang
Shihui Ying
Zhijie Wen
Publication date
16.03.2022
Publisher
Springer London
Published in
Neural Computing and Applications / Issue 15/2022
Print ISSN: 0941-0643
Electronic ISSN: 1433-3058
DOI
https://doi.org/10.1007/s00521-022-07126-3
