Published in: Pattern Analysis and Applications 3/2023

15.06.2023 | Theoretical Advances

Self-label correction for image classification with noisy labels

Authors: Yu Zhang, Fan Lin, Siya Mi, Yali Bian



Abstract

Label noise is inevitable in image classification. Existing methods often lack reliable criteria for selecting clean samples and rely on an auxiliary model for correction, whose quality has a great impact on the classification results. In this paper, we propose the Dual-model and Self-Label Correction (DSLC) method to select clean samples and correct labels without auxiliary models. First, we use a dual-model structure combined with contrastive learning to select clean samples. Then, we design a novel label correction method to revise the noisy labels. Finally, we propose a joint loss to improve the generalization ability of our models. Experiments on various datasets demonstrate the effectiveness of DSLC, which achieves performance comparable to state-of-the-art methods.
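The abstract does not spell out the selection and correction rules, but the family of methods it describes can be illustrated with a minimal sketch: a dual-model small-loss criterion (samples both models deem low-loss are treated as clean) followed by confidence-based self-label correction. All function names, thresholds, and the use of per-sample losses here are illustrative assumptions, not the paper's actual DSLC algorithm.

```python
import numpy as np

def select_clean(loss_a, loss_b, keep_ratio=0.7):
    """Dual-model small-loss selection (illustrative sketch).

    Noisy labels tend to incur high loss early in training, so each
    model keeps its lowest-loss fraction; a sample counts as clean
    only if *both* models agree it is low-loss.
    """
    k = int(len(loss_a) * keep_ratio)
    low_a = set(np.argsort(loss_a)[:k])  # indices of k smallest losses, model A
    low_b = set(np.argsort(loss_b)[:k])  # same for model B
    return sorted(low_a & low_b)         # agreement set

def correct_labels(probs, labels, threshold=0.9):
    """Confidence-based self-label correction (illustrative sketch).

    Replace a given label with the model's own prediction when the
    predicted-class probability exceeds a threshold; otherwise keep it.
    """
    preds = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    corrected = labels.copy()
    corrected[conf >= threshold] = preds[conf >= threshold]
    return corrected

# Toy usage: sample 2 is high-loss for A, sample 1 for B,
# so only samples 0 and 3 survive the agreement check.
loss_a = np.array([0.1, 0.2, 5.0, 0.3])
loss_b = np.array([0.2, 4.0, 0.1, 0.3])
clean_idx = select_clean(loss_a, loss_b, keep_ratio=0.75)

# Only the first sample is confident enough to overwrite its label.
probs = np.array([[0.95, 0.05],
                  [0.60, 0.40]])
new_labels = correct_labels(probs, np.array([1, 1]), threshold=0.9)
```

A joint loss in this setting would then combine a classification term on the corrected labels with a consistency or contrastive term between the two models; the exact weighting used by DSLC is described in the full paper.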


Metadata
Title
Self-label correction for image classification with noisy labels
Authors
Yu Zhang
Fan Lin
Siya Mi
Yali Bian
Publication date
15.06.2023
Publisher
Springer London
Published in
Pattern Analysis and Applications / Issue 3/2023
Print ISSN: 1433-7541
Electronic ISSN: 1433-755X
DOI
https://doi.org/10.1007/s10044-023-01180-w
