
2020 | Original Paper | Book Chapter

Farm Area Segmentation in Satellite Images Using DeepLabv3+ Neural Networks

Authors: Sara Sharifzadeh, Jagati Tata, Hilda Sharifzadeh, Bo Tan

Published in: Data Management Technologies and Applications

Publisher: Springer International Publishing


Abstract

Farm detection using low-resolution satellite images is an important component of digital agriculture applications such as crop-yield monitoring, yet it has received less attention than work on high-resolution imagery. Although high-resolution images are more effective for detecting land-cover components, the analysis of low-resolution images remains important: the historical satellite archives used for time-series analysis are low resolution, and such imagery is freely available and economical. In this paper, semantic segmentation of farm areas is addressed using low-resolution satellite images. The segmentation is performed in two stages. First, local patches, or Regions of Interest (ROI), that include farm areas are detected. Next, deep semantic-segmentation strategies are employed to label the farm pixels. For patch classification, two previously developed strategies are compared: a two-step semi-supervised methodology using hand-crafted features with Support Vector Machine (SVM) modelling, and transfer learning using pretrained Convolutional Neural Networks (CNNs). For the latter, the high-level features learnt by the large filter banks of the deep Visual Geometry Group network (VGG-16) are utilized. After classifying the image patches that contain farm areas, the DeepLabv3+ model is used for semantic segmentation of farm pixels. Four pretrained backbones, ResNet-18, ResNet-50, ResNet-101 and MobileNetV2, are used to transfer their learnt features to the new farm-segmentation problem. The first-stage results show the superiority of transfer learning over hand-crafted features for patch classification. The second-stage results show that the model trained on the ResNet-50 backbone achieved the highest semantic-segmentation accuracy.
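The two-stage flow described above — classify patches as farm/non-farm, then run per-pixel segmentation only inside the selected patches — can be sketched as follows. This is a minimal toy illustration of the control flow only: the patch classifier and pixel-level segmenter below are simple mean-intensity thresholds standing in for the SVM/VGG-16 classifiers and the DeepLabv3+ models used in the chapter, and the `threshold` parameter is an assumption of this sketch, not a value from the paper.

```python
def split_into_patches(image, patch_size):
    """Split a 2-D image (list of lists) into non-overlapping square patches,
    keyed by the (row, col) of each patch's top-left corner."""
    patches = {}
    for r in range(0, len(image), patch_size):
        for c in range(0, len(image[0]), patch_size):
            patches[(r, c)] = [row[c:c + patch_size] for row in image[r:r + patch_size]]
    return patches

def is_farm_patch(patch, threshold=0.5):
    """Stage 1 stand-in: flag a patch as farm when its mean value is high
    (the paper uses SVM on hand-crafted features or a VGG-16 classifier)."""
    vals = [v for row in patch for v in row]
    return sum(vals) / len(vals) >= threshold

def segment_patch(patch, threshold=0.5):
    """Stage 2 stand-in: per-pixel farm mask inside a selected patch
    (the paper uses DeepLabv3+ here)."""
    return [[1 if v >= threshold else 0 for v in row] for row in patch]

def farm_segmentation(image, patch_size=2):
    """Full pipeline: ROI detection on patches, then pixel labelling
    restricted to the patches classified as farm."""
    mask = [[0] * len(image[0]) for _ in range(len(image))]
    for (r, c), patch in split_into_patches(image, patch_size).items():
        if is_farm_patch(patch):            # stage 1: patch-level ROI detection
            seg = segment_patch(patch)      # stage 2: pixel-level segmentation
            for i, row in enumerate(seg):
                for j, v in enumerate(row):
                    mask[r + i][c + j] = v
    return mask
```

Running the pipeline on a 4x4 toy image with two bright 2x2 corners yields a mask that is non-zero only inside the patches that passed stage 1, which is the point of the ROI step: segmentation cost is spent only where farms are plausible.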


Metadata
Title
Farm Area Segmentation in Satellite Images Using DeepLabv3+ Neural Networks
Authors
Sara Sharifzadeh
Jagati Tata
Hilda Sharifzadeh
Bo Tan
Copyright year
2020
DOI
https://doi.org/10.1007/978-3-030-54595-6_7