Published in: Optical Memory and Neural Networks 4/2023

01.12.2023

ASE-UNet: An Orange Fruit Segmentation Model in an Agricultural Environment Based on Deep Learning

Authors: Changgeng Yu, Dashi Lin, Chaowen He


Abstract

A fruit-picking robot requires a powerful vision system that can accurately identify fruit on the tree. Accurate segmentation of orange fruit in orchards is challenging because of the complex environment, with fruits overlapping one another and occluded by foliage. In this work, we propose an image segmentation model called ASE-UNet, based on the U-Net architecture, that achieves accurate segmentation of oranges in complex environments. First, the backbone network structure is improved to reduce the down-sampling rate of orange fruit images, thereby retaining more spatial detail. Second, we introduce the Shape Feature Extraction Module (SFEM), which enhances the model's ability to distinguish fruits from background elements such as branches and leaves by extracting shape and outline information from the orange fruit target. Finally, an attention mechanism is applied in the skip connections to suppress interference from background channel features and to improve the fusion of high-level and low-level features. We evaluate the proposed model on a dataset of orange fruit images collected in an agricultural environment. The results show that ASE-UNet achieves IoU, Precision, Recall, and F1-score values of 90.03, 96.10, 93.45, and 94.75%, respectively, outperforming other semantic segmentation methods such as U-Net, PSPNet, and DeepLabv3+. The proposed method effectively addresses the low accuracy of fruit segmentation models in the agricultural environment and provides technical support for fruit-picking robots.
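As a point of reference for the figures above, the four reported metrics have standard definitions over the pixel-wise true positives (TP), false positives (FP), and false negatives (FN) of a binary fruit/background mask. A minimal NumPy sketch (the function name and toy masks below are illustrative, not from the paper):

```python
import numpy as np

def segmentation_metrics(pred, target):
    """IoU, precision, recall, and F1 for two binary masks of the same shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.sum(pred & target)    # fruit pixels correctly predicted
    fp = np.sum(pred & ~target)   # background pixels predicted as fruit
    fn = np.sum(~pred & target)   # fruit pixels missed by the model
    iou = tp / (tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return iou, precision, recall, f1

# Toy 2x2 masks: one true positive, one false positive, one false negative
pred = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [1, 0]])
print(segmentation_metrics(pred, target))  # (0.333..., 0.5, 0.5, 0.5)
```

Note that IoU is always the most stringent of the four (FP and FN both appear in its denominator), which is why the reported IoU (90.03%) sits below the F1-score (94.75%).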


References
1. Yuan, B. and Chen, C., Evolution of a development model for fruit industry against background of rising labor cost: intensive or extensive adjustment?, Sustainability, 2019, vol. 11, no. 14, p. 3864.
2. Wang, Z., Xun, Y., Wang, Y., and Yang, Q., Review of smart robots for fruit and vegetable picking in agriculture, Int. J. Agric. Biol. Eng., 2022, vol. 15, no. 1, pp. 33–54.
3. Gill, H.S., Murugesan, G., Khehra, B.S., Sajja, G.S., Gupta, G., and Bhatt, A., Fruit recognition from images using deep learning applications, Multimedia Tools Appl., 2022, vol. 81, no. 23, pp. 33269–33290.
4. Kamilaris, A. and Prenafeta-Boldú, F.X., Deep learning in agriculture: A survey, Comput. Electron. Agric., 2018, vol. 147, pp. 70–90.
5. Liu, T.H., Nie, X.N., Wu, J.M., Zhang, D., Liu, W., Cheng, Y.F., and Qi, L., Pineapple (Ananas comosus) fruit detection and localization in natural environment based on binocular stereo vision and improved YOLOv3 model, Precis. Agric., 2023, vol. 24, no. 1, pp. 139–160.
6. Mirhaji, H., Soleymani, M., Asakereh, A., and Mehdizadeh, S.A., Fruit detection and load estimation of an orange orchard using the YOLO models through simple approaches in different imaging and illumination conditions, Comput. Electron. Agric., 2021, vol. 191, p. 106533.
7. Zhang, J., Karkee, M., Zhang, Q., Zhang, X., Yaqoob, M., Fu, L., and Wang, S., Multi-class object detection using faster R-CNN and estimation of shaking locations for automated shake-and-catch apple harvesting, Comput. Electron. Agric., 2020, vol. 173, p. 105384.
8. Tu, S., Pang, J., Liu, H., Zhuang, N., Chen, Y., Zheng, C., and Xue, Y., Passion fruit detection and counting based on multiple scale faster R-CNN using RGB-D images, Precis. Agric., 2020, vol. 21, pp. 1072–1091.
9. Redmon, J. and Farhadi, A., YOLOv3: An incremental improvement, arXiv preprint arXiv:1804.02767, 2018.
10. Girshick, R., Fast R-CNN, Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1440–1448.
11. Bochkovskiy, A., Wang, C.Y., and Liao, H.Y.M., YOLOv4: Optimal speed and accuracy of object detection, arXiv preprint arXiv:2004.10934, 2020.
12. Asgari Taghanaki, S., Abhishek, K., Cohen, J.P., Cohen-Adad, J., and Hamarneh, G., Deep semantic segmentation of natural and medical images: a review, Artif. Intell. Rev., 2021, vol. 54, pp. 137–178.
13. Anagnostis, A., Tagarakis, A.C., Kateris, D., Moysiadis, V., Sørensen, C.G., Pearson, S., and Bochtis, D., Orchard mapping with deep learning semantic segmentation, Sensors, 2021, vol. 21, no. 11, p. 3813.
14. Wang, Y., Lv, J., Xu, L., Gu, Y., Zou, L., and Ma, Z., A segmentation method for waxberry image under orchard environment, Sci. Hortic., 2020, vol. 266, p. 109309.
15. Kestur, R., Meduri, A., and Narasipura, O., MangoNet: A deep semantic segmentation architecture for a method to detect and count mangoes in an open orchard, Eng. Appl. Artif. Intell., 2019, vol. 77, pp. 59–69.
16. Tian, Y., Yang, G., Wang, Z., Li, E., and Liang, Z., Instance segmentation of apple flowers using the improved mask R-CNN model, Biosyst. Eng., 2020, vol. 193, pp. 264–278.
17. Li, Q., Jia, W., Sun, M., Hou, S., and Zheng, Y., A novel green apple segmentation algorithm based on ensemble U-Net under complex orchard environment, Comput. Electron. Agric., 2021, vol. 180, p. 105900.
18. Ronneberger, O., Fischer, P., and Brox, T., U-Net: Convolutional networks for biomedical image segmentation, Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015: 18th International Conference, 2015, pp. 234–241.
19. Sun, K., Wang, X., Liu, S., and Liu, C., Apple, peach, and pear flower detection using semantic segmentation network and shape constraint level set, Comput. Electron. Agric., vol. 185, p. 106150.
20. Wang, Q., Wu, B., Zhu, P., Li, P., Zuo, W., and Hu, Q., ECA-Net: Efficient channel attention for deep convolutional neural networks, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 11534–11542.
21. Takikawa, T., Acuna, D., Jampani, V., and Fidler, S., Gated-SCNN: Gated shape CNNs for semantic segmentation, Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 5229–5238.
22. Zhao, H., Shi, J., Qi, X., Wang, X., and Jia, J., Pyramid scene parsing network, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 2881–2890.
23. Badrinarayanan, V., Kendall, A., and Cipolla, R., SegNet: A deep convolutional encoder-decoder architecture for image segmentation, IEEE Trans. Pattern Anal. Mach. Intell., 2017, vol. 39, no. 12, pp. 2481–2495.
24. Chen, L.C., Zhu, Y., Papandreou, G., Schroff, F., and Adam, H., Encoder-decoder with atrous separable convolution for semantic image segmentation, Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 801–818.
Metadata
Title
ASE-UNet: An Orange Fruit Segmentation Model in an Agricultural Environment Based on Deep Learning
Authors
Changgeng Yu
Dashi Lin
Chaowen He
Publication date
01.12.2023
Publisher
Pleiades Publishing
Published in
Optical Memory and Neural Networks / Issue 4/2023
Print ISSN: 1060-992X
Electronic ISSN: 1934-7898
DOI
https://doi.org/10.3103/S1060992X23040045
