Published in: International Journal of Multimedia Information Retrieval 2/2024

01.06.2024 | Regular Paper

Multi-knowledge-driven enhanced module for visible-infrared cross-modal person Re-identification

Written by: Shihao Shan, Peixin Sun, Guoqiang Xiao, Song Wu


Abstract

Visible-Infrared Person Re-identification (VI-ReID) is challenging in social-security surveillance because the semantic gap between cross-modal data significantly reduces VI-ReID performance. To overcome this challenge, this paper proposes a novel Multi-Knowledge-driven Enhancement Module (MKEM) for high-performance VI-ReID. It focuses on explicitly learning appropriate transition modalities and synthesizing them effectively, reducing the burden on models that must learn vastly different cross-modal knowledge. The MKEM consists of a Visible Knowledge-driven Enhancement Module (VKEM) and an Infrared Knowledge-driven Enhancement Module (IKEM), which generate knowledge-accumulating transition modalities for the visible and infrared modalities, respectively. To leverage the transition modalities effectively, the model must learn the original data distribution while accumulating knowledge of the transition modalities; a Diversity Loss is therefore designed to guide the representations of the generated transition modalities to be diverse, which facilitates the model's knowledge accumulation. To prevent redundant knowledge accumulation, a Consistency Loss is proposed to maintain semantic similarity between the original and generated transition modalities. Furthermore, we implement a Bias Adjustment Strategy (BAS) to effectively adjust the gap between head and tail categories. We evaluated the proposed MKEM on two VI-ReID benchmark datasets, SYSU-MM01 and RegDB, and the experimental results demonstrate that our method significantly outperforms existing methods. The source code of our proposed MKEM is available at https://github.com/SWU-CS-MediaLab/MKEM.
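The two loss terms described above can be sketched in code. This is a minimal illustrative sketch, not the paper's actual formulation: it assumes the Diversity Loss penalizes high pairwise cosine similarity among generated transition-modality features, and the Consistency Loss penalizes cosine distance between original and transition features. Both function names and the exact distance measures are assumptions for illustration.

```python
import numpy as np


def diversity_loss(transition_feats: np.ndarray) -> float:
    """Hypothetical sketch of a Diversity Loss: push generated
    transition-modality representations apart by penalizing the mean
    off-diagonal pairwise cosine similarity."""
    f = transition_feats / np.linalg.norm(transition_feats, axis=1, keepdims=True)
    sim = f @ f.T                              # pairwise cosine similarities
    n = f.shape[0]
    off_diag = sim[~np.eye(n, dtype=bool)]     # exclude self-similarity
    return float(off_diag.mean())


def consistency_loss(orig_feats: np.ndarray, transition_feats: np.ndarray) -> float:
    """Hypothetical sketch of a Consistency Loss: keep each transition
    feature semantically close to its original via 1 - cosine similarity."""
    a = orig_feats / np.linalg.norm(orig_feats, axis=1, keepdims=True)
    b = transition_feats / np.linalg.norm(transition_feats, axis=1, keepdims=True)
    return float((1.0 - (a * b).sum(axis=1)).mean())
```

In a training loop, the two terms would pull in opposite directions: the Diversity Loss rewards transition features that spread out (accumulating varied knowledge), while the Consistency Loss anchors them to the original modality's semantics so the accumulated knowledge does not become redundant or drift.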


Metadata
Title
Multi-knowledge-driven enhanced module for visible-infrared cross-modal person Re-identification
Written by
Shihao Shan
Peixin Sun
Guoqiang Xiao
Song Wu
Publication date
01.06.2024
Publisher
Springer London
Published in
International Journal of Multimedia Information Retrieval / Issue 2/2024
Print ISSN: 2192-6611
Electronic ISSN: 2192-662X
DOI
https://doi.org/10.1007/s13735-024-00327-7
