Published in: Arabian Journal for Science and Engineering 2/2023

21.03.2022 | Research Article-Computer Engineering and Computer Science

Semantic Segmentation of Satellite Images Using Deep-Unet

By: Ningthoujam Johny Singh, Kishorjit Nongmeikapam



Abstract

The ability to extract roads, detect buildings, and identify land-cover types from satellite images is critical for sustainable development, agriculture, forestry, urban planning, and climate-change research. Semantic segmentation of satellite images to map vegetation cover and support urban planning is therefore a need of the hour. In this paper, DeepUNet, a modified version of UNet, is used for semantic segmentation, with the input images pre-processed using FAAGKFCM and SLIC superpixels, to classify different land-cover types from satellite imagery. The research aims to train and test convolutional models for mapping land cover, assessing its usability, and identifying changes in land cover. Using mIoU and global accuracy as evaluation metrics, the proposed model is compared with other methods, namely SegNet, UNet, and DeepUNet. The proposed model is found to outperform these methods, with an mIoU of 89.51% and a global accuracy of 90.6%.
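The two evaluation metrics named in the abstract can be sketched as follows. This is not the authors' code, only a minimal numpy illustration of how mIoU and global (pixel) accuracy are typically computed from a per-class confusion matrix; the function names and the toy label maps are illustrative assumptions.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Accumulate an n_classes x n_classes confusion matrix from label maps."""
    mask = (y_true >= 0) & (y_true < n_classes)
    return np.bincount(
        n_classes * y_true[mask].astype(int) + y_pred[mask].astype(int),
        minlength=n_classes ** 2,
    ).reshape(n_classes, n_classes)

def miou_and_global_accuracy(y_true, y_pred, n_classes):
    """mIoU averages per-class intersection-over-union; global accuracy
    is the fraction of correctly labelled pixels."""
    cm = confusion_matrix(y_true, y_pred, n_classes)
    intersection = np.diag(cm)                    # true positives per class
    union = cm.sum(0) + cm.sum(1) - intersection  # TP + FP + FN per class
    iou = intersection / np.maximum(union, 1)
    return iou.mean(), intersection.sum() / cm.sum()

# Toy 2x3 segmentation maps with 3 classes: one mislabelled pixel.
truth = np.array([[0, 0, 1], [1, 2, 2]])
pred = np.array([[0, 1, 1], [1, 2, 2]])
miou, acc = miou_and_global_accuracy(truth, pred, 3)
```

Note that this averages IoU over all classes, including any class absent from the ground truth; published figures sometimes average only over classes present in the test set.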


Metadata
Title
Semantic Segmentation of Satellite Images Using Deep-Unet
Authors
Ningthoujam Johny Singh
Kishorjit Nongmeikapam
Publication date
21.03.2022
Publisher
Springer Berlin Heidelberg
Published in
Arabian Journal for Science and Engineering / Issue 2/2023
Print ISSN: 2193-567X
Electronic ISSN: 2191-4281
DOI
https://doi.org/10.1007/s13369-022-06734-4
