
2020 | Original Paper | Book Chapter

Driveable Area Detection Using Semantic Segmentation Deep Neural Network

Authors: P. Subhasree, P. Karthikeyan, R. Senthilnathan

Published in: Computational Intelligence in Data Science

Publisher: Springer International Publishing


Abstract

Autonomous vehicles use road images to detect roads, identify lanes, locate objects around the vehicle, and extract other important information. The information retrieved from road data helps an autonomous vehicle make appropriate driving decisions. Road segmentation is a technique that segments the road from an image, and many deep learning networks developed for semantic segmentation can be fine-tuned for this task. This paper presents details of segmenting the driveable area from a road image using a semantic segmentation network. The network used segments the road into the directly driveable area and the alternately driveable area separately. The two areas are semantically different, but differentiating between them is a difficult computer vision task since they are similar in texture, color, and other important features. The development of advanced deep convolutional neural networks and large road datasets, however, has made this differentiation possible. Results achieved in detecting the driveable area using the semantic segmentation network DeepLab on the Berkeley DeepDrive dataset are reported.
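As the abstract describes, the network assigns each pixel one of several classes (e.g. background, directly driveable, alternately driveable). Segmentation results of this kind are commonly scored per class with intersection-over-union (IoU); the chapter's exact evaluation protocol is not reproduced here, so the following is only an illustrative sketch of that metric on flat label maps.

```python
def per_class_iou(pred, truth, num_classes):
    """Per-class intersection-over-union for two flat label maps.

    pred, truth: equal-length sequences of integer class labels, one per
    pixel (e.g. 0 = background, 1 = driveable, 2 = alternately driveable).
    Returns a list of IoU scores indexed by class; a class absent from
    both maps scores None.
    """
    inter = [0] * num_classes
    union = [0] * num_classes
    for p, t in zip(pred, truth):
        if p == t:
            # Correctly labelled pixel: counts toward both sets once.
            inter[p] += 1
            union[p] += 1
        else:
            # Mislabelled pixel: enlarges the union of both classes.
            union[p] += 1
            union[t] += 1
    return [i / u if u else None for i, u in zip(inter, union)]


# Tiny 4-pixel example: one driveable pixel mistaken for alternate.
scores = per_class_iou([1, 1, 2, 0], [1, 2, 2, 0], num_classes=3)
print(scores)  # [1.0, 0.5, 0.5]
```

In practice the label maps would come from the network's per-pixel argmax and the dataset's ground-truth masks, flattened to one dimension.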


References
1. Levinson, J., et al.: Towards fully autonomous driving: systems and algorithms. In: IEEE Intelligent Vehicles Symposium (IV). IEEE (2011)
2. Kim, J., Park, C.: End-to-end ego lane estimation based on sequential transfer learning for self-driving cars. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (2017)
3. Ohn-Bar, E., Trivedi, M.M.: Are all objects equal? Deep spatio-temporal importance prediction in driving videos. Pattern Recogn. 64, 425–436 (2017)
4. Yu, F., et al.: BDD100K: a diverse driving video database with scalable annotation tooling. arXiv (2018)
5. Máttyus, G., Wang, S., Fidler, S., Urtasun, R.: HD maps: fine-grained road segmentation by parsing ground and aerial images. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3611–3619 (2016)
6. Caltagirone, L., Bellone, M., Svensson, L., Wahde, M.: LIDAR–camera fusion for road detection using fully convolutional neural networks. Robot. Auton. Syst. 111, 25–31 (2019)
7. Yang, X., Li, X., Ye, Y., Lau, R.Y., Zhang, X., Huang, X.: Road detection and centerline extraction via deep recurrent convolutional neural network U-Net. IEEE Trans. Geosci. Remote Sens. 57(9), 7209–7220 (2019)
8. Xiao, L., Wang, R., Dai, B., Fang, Y., Liu, D., Wu, T.: Hybrid conditional random field based camera-LIDAR fusion for road detection. Inf. Sci. 432, 543–558 (2018)
9. Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431–3440 (2015)
10. Noh, H., Hong, S., Han, B.: Learning deconvolution network for semantic segmentation. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1520–1528 (2015)
11. Girshick, R., Donahue, J., Darrell, T., Malik, J.: Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 580–587 (2014)
12. Lin, G., Milan, A., Shen, C., Reid, I.: RefineNet: multi-path refinement networks for high-resolution semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1925–1934 (2017)
13. Luc, P., Couprie, C., Chintala, S., Verbeek, J.: Semantic segmentation using adversarial networks. arXiv preprint arXiv:1611.08408 (2016)
14. Wang, P., et al.: Understanding convolution for semantic segmentation. In: IEEE Winter Conference on Applications of Computer Vision, pp. 1451–1460 (2018)
16. Wei, Y., Xiao, H., Shi, H., Jie, Z., Feng, J., Huang, T.S.: Revisiting dilated convolution: a simple approach for weakly- and semi-supervised semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7268–7277 (2018)
17. Zhao, H., Qi, X., Shen, X., Shi, J., Jia, J.: ICNet for real-time semantic segmentation on high-resolution images. In: Proceedings of the European Conference on Computer Vision, pp. 405–420 (2018)
20. Cordts, M., et al.: The Cityscapes dataset for semantic urban scene understanding. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3213–3223 (2016)
21. Geiger, A., Lenz, P., Stiller, C., Urtasun, R.: Vision meets robotics: the KITTI dataset. Int. J. Robot. Res. 32(11), 1231–1237 (2013)
22. Huang, X., Wang, P., Cheng, X., Zhou, D., Geng, Q., Yang, R.: The ApolloScape open dataset for autonomous driving and its application. arXiv preprint arXiv:1803.06184 (2018)
23. Maddern, W., et al.: 1 year, 1000 km: the Oxford RobotCar dataset. Int. J. Robot. Res. 36(1), 3–15 (2017)
24. Chen, L.-C., et al.: DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 40(4), 834–848 (2018)
25. Chen, L.C., Papandreou, G., Schroff, F., Adam, H.: Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587 (2017)
Metadata
Title
Driveable Area Detection Using Semantic Segmentation Deep Neural Network
Authors
P. Subhasree
P. Karthikeyan
R. Senthilnathan
Copyright Year
2020
DOI
https://doi.org/10.1007/978-3-030-63467-4_18