Published in: International Journal of Computer Vision, Issue 6/2020

08.01.2020

Fine-Grained Person Re-identification

Authors: Jiahang Yin, Ancong Wu, Wei-Shi Zheng



Abstract

Person re-identification (re-id) plays a critical role in tracking people via surveillance systems by matching people across non-overlapping camera views at different locations. Most re-id methods depend largely on a person's appearance features, and such methods typically assume that the appearance information (particularly color) is distinguishable. However, appearance cues alone are ineffective for distinguishing people who dress in very similar clothes (especially the same type of clothes, e.g., uniforms). We call this the fine-grained person re-identification (FG re-id) problem. To solve it, rather than relying on clothing color, we propose to exploit two types of local dynamic pose features: a motion-attentive local dynamic pose feature and a joint-specific local dynamic pose feature. The two are complementary and describe identity-specific pose characteristics, which we find to be more distinctive and discriminative than appearance when people look similar. A deep neural network is formed to learn these local dynamic pose features and to jointly quantify motion and global visual cues. Because a suitable benchmark for evaluating the FG re-id problem is lacking, we also contribute a fine-grained person re-identification (FGPR) dataset containing 358 identities. Extensive evaluations on the FGPR dataset show that our proposed model achieves the best performance for FG re-id compared with related person re-id and fine-grained recognition methods. In addition, we verify that our method remains effective for conventional video-based person re-id.
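To make the fusion idea in the abstract concrete, the sketch below shows one minimal way such a model could be organized: a global visual branch, a motion-attentive pose branch that weights frames by how much the joints move, and a joint-specific branch that encodes each joint trajectory separately, with the three features concatenated for identification. This is an illustrative PyTorch sketch under our own assumptions; the module names, dimensions, and attention scheme are hypothetical and do not reproduce the authors' actual architecture.

```python
# Hypothetical sketch of a two-branch pose + global appearance model for FG re-id.
# Illustrative only: layer choices and sizes are assumptions, not the paper's design.
import torch
import torch.nn as nn


class FGReIDSketch(nn.Module):
    def __init__(self, num_joints=18, feat_dim=128, num_ids=358):
        super().__init__()
        # Global visual branch: tiny CNN over one frame (stand-in for a real backbone).
        self.visual = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim))
        # Motion-attentive branch: encode the whole pose sequence, then pool with
        # per-frame weights derived from joint motion magnitude (a simple attention proxy).
        self.motion = nn.GRU(num_joints * 2, feat_dim, batch_first=True)
        # Joint-specific branch: a shared temporal encoder applied to each joint trajectory.
        self.joint = nn.GRU(2, feat_dim, batch_first=True)
        self.classifier = nn.Linear(feat_dim * 3, num_ids)

    def forward(self, frame, poses):
        # frame: (B, 3, H, W); poses: (B, T, J, 2) 2-D joint coordinates per frame.
        b, t, j, _ = poses.shape
        v = self.visual(frame)                                  # global visual feature (B, D)

        motion = (poses[:, 1:] - poses[:, :-1]).norm(dim=-1)    # per-joint motion (B, T-1, J)
        attn = torch.softmax(motion.sum(dim=-1), dim=1)         # per-frame motion weights
        seq = poses[:, 1:].flatten(2)                           # (B, T-1, J*2)
        m_out, _ = self.motion(seq)                             # (B, T-1, D)
        m = (attn.unsqueeze(-1) * m_out).sum(dim=1)             # motion-attentive feature

        per_joint = poses.permute(0, 2, 1, 3).reshape(b * j, t, 2)
        _, h = self.joint(per_joint)                            # (1, B*J, D)
        jfeat = h.squeeze(0).view(b, j, -1).mean(dim=1)         # joint-specific feature

        return self.classifier(torch.cat([v, m, jfeat], dim=1))


# Example: a batch of 4 tracklets, 16 frames each, 18 joints per frame.
model = FGReIDSketch()
frame = torch.randn(4, 3, 128, 64)
poses = torch.randn(4, 16, 18, 2)
logits = model(frame, poses)  # (4, 358) identity scores
```

The point of the sketch is only the complementarity argued in the abstract: the motion-attentive feature summarizes how the whole body moves, the joint-specific feature preserves which joints move distinctively, and both are fused with global visual cues rather than replacing them.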


Metadata
Title
Fine-Grained Person Re-identification
Authors
Jiahang Yin
Ancong Wu
Wei-Shi Zheng
Publication date
08.01.2020
Publisher
Springer US
Published in
International Journal of Computer Vision / Issue 6/2020
Print ISSN: 0920-5691
Electronic ISSN: 1573-1405
DOI
https://doi.org/10.1007/s11263-019-01259-0
