
2020 | Original Paper | Book Chapter

A Bottom-Up Approach for Pig Skeleton Extraction Using RGB Data

Authors: Akif Quddus Khan, Salman Khan, Mohib Ullah, Faouzi Alaya Cheikh

Published in: Image and Signal Processing

Publisher: Springer International Publishing


Abstract

Animal behavior analysis is a crucial task for industrial farming. In an indoor farm setting, extracting the key joints of animals is essential for tracking an animal over a long period of time. In this paper, we propose a deep network that exploits transfer learning to train for pig skeleton extraction in an end-to-end fashion. The backbone of the architecture is a stacked-hourglass DenseNet. To train the network, keyframes are selected from the test data using a K-means sampler. In total, 9 keypoints are annotated, which support detailed behavior analysis in the farm setting. Extensive experiments are conducted, and the quantitative results show that the network has the potential to increase tracking performance by a substantial margin.
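The abstract mentions selecting keyframes with a K-means sampler: frames are embedded as feature vectors, clustered, and the frame nearest each cluster centroid is kept as a representative for annotation. The paper does not publish its sampler, so the following is only a minimal sketch of that idea using plain NumPy and Lloyd's algorithm; the function name and feature representation are illustrative assumptions.

```python
import numpy as np

def select_keyframes(features, k, seed=0, iters=50):
    """Pick up to k representative frames via K-means: cluster the
    (n_frames, dim) feature matrix and return the indices of the
    frames closest to the resulting centroids."""
    rng = np.random.default_rng(seed)
    n = len(features)
    # initialize centroids from k distinct frames
    centroids = features[rng.choice(n, size=k, replace=False)]
    for _ in range(iters):
        # assign every frame to its nearest centroid
        dists = np.linalg.norm(features[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        # recompute centroids; keep the old one if a cluster emptied
        new_centroids = np.array([
            features[labels == j].mean(axis=0) if np.any(labels == j)
            else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    # keyframe for each cluster = frame nearest its centroid
    dists = np.linalg.norm(features[:, None] - centroids[None], axis=2)
    return np.unique(dists.argmin(axis=0))

# toy example: 100 "frames" with 8-dim features, pick 5 keyframes
frames = np.random.default_rng(1).normal(size=(100, 8))
keyframe_ids = select_keyframes(frames, k=5)
print(keyframe_ids)
```

In practice the feature vectors would come from the video itself (e.g. downsampled pixels or CNN embeddings), so the selected keyframes span the visual variety of the footage before manual keypoint annotation.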


Metadata
Title
A Bottom-Up Approach for Pig Skeleton Extraction Using RGB Data
Authors
Akif Quddus Khan
Salman Khan
Mohib Ullah
Faouzi Alaya Cheikh
Copyright year
2020
DOI
https://doi.org/10.1007/978-3-030-51935-3_6
