
Camera-trap images segmentation using multi-layer robust principal component analysis

  • Original Article
  • Published in: The Visual Computer

Abstract

Segmenting animals from camera-trap images is a difficult task, because these images pose numerous challenges arising from environmental conditions and hardware limitations. We propose a multi-layer robust principal component analysis (multi-layer RPCA) approach for background subtraction. Our method computes sparse and low-rank images from a weighted sum of descriptors, using color and texture features, with camera-trap image segmentation as the case study. The segmentation algorithm combines histogram equalization or Gaussian filtering as pre-processing with morphological filters and active contours as post-processing. The parameters of our multi-layer RPCA were optimized with an exhaustive search. The database consists of camera-trap images from the Colombian forest taken by the Instituto de Investigación de Recursos Biológicos Alexander von Humboldt. We analyzed the performance of our method in the challenging situations inherent to camera-trap images. Furthermore, we compared our method with several state-of-the-art background subtraction algorithms, and our multi-layer RPCA outperformed them. Our multi-layer RPCA reached an average fine-grained F-measure of 76.17% on color sequences and 69.97% on infrared sequences. To the best of our knowledge, this paper is the first work to propose multi-layer RPCA and to apply it to camera-trap image segmentation.
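
To make the decomposition described above concrete, the sketch below shows one way such a pipeline could look in Python. Each frame is turned into a descriptor that mixes intensity with a crude gradient-magnitude texture proxy (standing in for the paper's color and texture features) through a weighted sum, the per-frame descriptors are stacked into a data matrix, and a standard inexact augmented Lagrange multiplier (ALM) RPCA solver splits that matrix into a low-rank background and a sparse foreground. The descriptor weights, the regularization parameter, the texture proxy, and the fixed threshold are illustrative assumptions, not the authors' implementation, which additionally applies histogram equalization or Gaussian filtering as pre-processing and morphological filters with active contours as post-processing.

```python
import numpy as np


def inexact_alm_rpca(D, lam=None, tol=1e-6, max_iter=200):
    """Decompose D into low-rank L plus sparse S by solving
    min ||L||_* + lam * ||S||_1  s.t.  D = L + S
    with the inexact augmented Lagrange multiplier method."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))          # common default for RPCA
    norm_fro = np.linalg.norm(D, 'fro')
    norm_two = np.linalg.norm(D, 2)             # largest singular value
    Y = D / max(norm_two, np.abs(D).max() / lam)  # dual variable initialization
    mu, rho = 1.25 / norm_two, 1.5
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(max_iter):
        # Low-rank update: singular value thresholding.
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Sparse update: elementwise soft thresholding (shrinkage).
        R = D - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Z = D - L - S
        Y = Y + mu * Z
        mu = rho * mu
        if np.linalg.norm(Z, 'fro') / norm_fro < tol:
            break
    return L, S


def combine_descriptors(frames, weights=(0.7, 0.3)):
    """Stack one column per frame and merge two descriptor layers
    (intensity and a gradient-magnitude texture proxy) by a weighted sum.
    The weights here are arbitrary; the paper tunes them by exhaustive search."""
    w_int, w_tex = weights
    intensity = np.stack([f.ravel() for f in frames], axis=1)
    texture = []
    for f in frames:
        gy, gx = np.gradient(f.astype(float))
        texture.append(np.hypot(gx, gy).ravel())
    texture = np.stack(texture, axis=1)
    return w_int * intensity + w_tex * texture


if __name__ == '__main__':
    # Toy sequence: a static noisy background with a moving bright square
    # standing in for an animal.
    rng = np.random.default_rng(0)
    frames = []
    for t in range(20):
        frame = 50.0 + 5.0 * rng.standard_normal((64, 64))
        frame[20 + t:30 + t, 20:30] += 120.0     # moving foreground blob
        frames.append(frame)

    D = combine_descriptors(frames)              # shape (4096, 20)
    L, S = inexact_alm_rpca(D)
    # Columns of S are the sparse (foreground) images; a fixed threshold
    # stands in for the paper's morphological/active-contour refinement.
    masks = np.abs(S).reshape(64, 64, len(frames)) > 25.0
    print('foreground pixels per frame:', masks.sum(axis=(0, 1)))
```

In the paper's setting, the descriptor layers would instead be color and texture features and the combination weights would be chosen by exhaustive search, but the weighted-sum-then-decompose structure above is the part the abstract describes.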




Acknowledgements

This work was supported by the Colombian National Fund for Science, Technology and Innovation, Francisco José de Caldas - COLCIENCIAS (Colombia), under Project No. 111571451061.

Author information


Corresponding author

Correspondence to Jhony-Heriberto Giraldo-Zuluaga.

About this article


Cite this article

Giraldo-Zuluaga, JH., Salazar, A., Gomez, A. et al. Camera-trap images segmentation using multi-layer robust principal component analysis. Vis Comput 35, 335–347 (2019). https://doi.org/10.1007/s00371-017-1463-9

