
2020 | OriginalPaper | Chapter

Driveable Area Detection Using Semantic Segmentation Deep Neural Network

Authors: P. Subhasree, P. Karthikeyan, R. Senthilnathan

Published in: Computational Intelligence in Data Science

Publisher: Springer International Publishing


Abstract

Autonomous vehicles use road images to detect the road, identify lanes, locate objects around the vehicle, and extract other important information. The information retrieved from this road data helps the vehicle make appropriate driving decisions. Road segmentation is one such technique, which separates the road region from the rest of the image. Many deep learning networks developed for semantic segmentation can be fine-tuned for road segmentation. This paper presents the segmentation of the driveable area from road images using a semantic segmentation network that labels the directly driveable area and the alternately driveable area as separate classes. The two regions are semantically different, yet distinguishing them is a difficult computer vision task because they are similar in texture, color, and other important features. Advances in deep convolutional neural networks and the availability of large road datasets have made this differentiation possible. Results obtained in detecting the driveable area with the DeepLab semantic segmentation network on the Berkeley DeepDrive (BDD) dataset are reported.
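
As a rough illustration of the fine-tuning workflow the abstract describes, the sketch below adapts an off-the-shelf DeepLab v3 model (torchvision's deeplabv3_resnet50) to a three-class driveable-area task (background, directly driveable, alternately driveable). The class layout, input resolution, and optimiser settings are illustrative assumptions, not the configuration reported in the chapter.

# Minimal sketch, assuming BDD100K-style label masks have been prepared as
# single-channel class-index images; this is not the authors' exact pipeline.
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 3  # 0: background, 1: directly driveable, 2: alternately driveable

# DeepLab v3 with a randomly initialised 3-class head; the chapter's exact
# backbone, pretraining, and training schedule are not reproduced here.
model = deeplabv3_resnet50(num_classes=NUM_CLASSES)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train_step(images, masks):
    """images: (N, 3, H, W) float tensor; masks: (N, H, W) long tensor of class ids."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)["out"]      # (N, NUM_CLASSES, H, W) per-pixel class scores
    loss = criterion(logits, masks)    # pixel-wise cross-entropy against the label mask
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Dummy tensors only, to show the expected shapes of a training batch.
    imgs = torch.randn(2, 3, 384, 640)
    msks = torch.randint(0, NUM_CLASSES, (2, 384, 640))
    print(train_step(imgs, msks))

At inference time, logits.argmax(dim=1) yields a per-pixel class map from which the driveable and alternately driveable regions can be extracted.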

Literature
1. Levinson, J., et al.: Towards fully autonomous driving: systems and algorithms. In: IEEE Intelligent Vehicles Symposium (IV). IEEE (2011)
2. Kim, J., Park, C.: End-to-end ego lane estimation based on sequential transfer learning for self-driving cars. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (2017)
3. Ohn-Bar, E., Trivedi, M.M.: Are all objects equal? Deep spatio-temporal importance prediction in driving videos. Pattern Recogn. 64, 425–436 (2017)
4. Yu, F., et al.: BDD100K: a diverse driving video database with scalable annotation tooling. arXiv (2018)
5. Máttyus, G., Wang, S., Fidler, S., Urtasun, R.: HD maps: fine-grained road segmentation by parsing ground and aerial images. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3611–3619 (2016)
6. Caltagirone, L., Bellone, M., Svensson, L., Wahde, M.: LIDAR–camera fusion for road detection using fully convolutional neural networks. Robot. Auton. Syst. 111, 25–31 (2019)
7. Yang, X., Li, X., Ye, Y., Lau, R.Y., Zhang, X., Huang, X.: Road detection and centerline extraction via deep recurrent convolutional neural network U-Net. IEEE Trans. Geosci. Remote Sens. 57(9), 7209–7220 (2019)
8. Xiao, L., Wang, R., Dai, B., Fang, Y., Liu, D., Wu, T.: Hybrid conditional random field based camera-LIDAR fusion for road detection. Inf. Sci. 432, 543–558 (2018)
9. Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431–3440 (2015)
10. Noh, H., Hong, S., Han, B.: Learning deconvolution network for semantic segmentation. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1520–1528 (2015)
11. Girshick, R., Donahue, J., Darrell, T., Malik, J.: Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 580–587 (2014)
12. Lin, G., Milan, A., Shen, C., Reid, I.: RefineNet: multi-path refinement networks for high-resolution semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1925–1934 (2017)
13. Luc, P., Couprie, C., Chintala, S., Verbeek, J.: Semantic segmentation using adversarial networks. arXiv preprint arXiv:1611.08408 (2016)
14. Wang, P., et al.: Understanding convolution for semantic segmentation. In: IEEE Winter Conference on Applications of Computer Vision, pp. 1451–1460 (2018)
16. Wei, Y., Xiao, H., Shi, H., Jie, Z., Feng, J., Huang, T.S.: Revisiting dilated convolution: a simple approach for weakly- and semi-supervised semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7268–7277 (2018)
17. Zhao, H., Qi, X., Shen, X., Shi, J., Jia, J.: ICNet for real-time semantic segmentation on high-resolution images. In: Proceedings of the European Conference on Computer Vision, pp. 405–420 (2018)
20. Cordts, M., et al.: The Cityscapes dataset for semantic urban scene understanding. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3213–3223 (2016)
21. Geiger, A., Lenz, P., Stiller, C., Urtasun, R.: Vision meets robotics: the KITTI dataset. Int. J. Robot. Res. 32(11), 1231–1237 (2013)
22. Huang, X., Wang, P., Cheng, X., Zhou, D., Geng, Q., Yang, R.: The ApolloScape open dataset for autonomous driving and its application. arXiv preprint arXiv:1803.06184 (2018)
23. Maddern, W., et al.: 1 year, 1000 km: the Oxford RobotCar dataset. Int. J. Robot. Res. 36(1), 3–15 (2017)
24. Chen, L.-C., et al.: DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 40(4), 834–848 (2018)
25. Chen, L.-C., Papandreou, G., Schroff, F., Adam, H.: Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587 (2017)
Metadata
Title
Driveable Area Detection Using Semantic Segmentation Deep Neural Network
Authors
P. Subhasree
P. Karthikeyan
R. Senthilnathan
Copyright Year
2020
DOI
https://doi.org/10.1007/978-3-030-63467-4_18