2018 | Original Paper | Book Chapter

Shallow and Deep Model Investigation for Distinguishing Corn and Weeds

Authors: Yu Xia, Hongxun Yao, Xiaoshuai Sun, Yanhao Zhang

Published in: Advances in Multimedia Information Processing – PCM 2017

Publisher: Springer International Publishing


Abstract

Agriculture is developing rapidly, and corn yield is an important indicator of agricultural output, which makes automatic weed removal a necessary and urgent task. Distinguishing corn from weeds poses many challenges, the biggest being the similarity in both color and shape between the two. Processing speed is also critical in practical applications. In this paper, we investigate two methods for this task. The first computes SIFT and Harris feature descriptors and then uses an SVM classifier to distinguish corn from weeds. The second is an end-to-end solution based on the Faster R-CNN model, in which we design a specific module to improve processing speed while preserving accuracy. Experimental results on our dataset demonstrate that detection based on the improved Faster R-CNN model handles the problem better.
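To make the shallow approach concrete, here is a minimal bag-of-visual-words sketch, assuming OpenCV for SIFT extraction and scikit-learn for the codebook and SVM. The vocabulary size, kernel choice, and helper names (`sift_descriptors`, `bow_histograms`, `train_paths`, `train_labels`) are illustrative guesses, not details from the paper (which additionally uses Harris descriptors alongside SIFT).

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

VOCAB_SIZE = 200  # assumed codebook size, not taken from the paper


def sift_descriptors(image_paths):
    """Extract 128-D SIFT descriptors from each image."""
    sift = cv2.SIFT_create()
    per_image = []
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = sift.detectAndCompute(gray, None)
        per_image.append(desc if desc is not None
                         else np.empty((0, 128), np.float32))
    return per_image


def bow_histograms(per_image_desc, codebook):
    """Quantize descriptors against the codebook into normalized histograms."""
    hists = []
    for desc in per_image_desc:
        hist = np.zeros(VOCAB_SIZE)
        if len(desc):
            for word in codebook.predict(desc):
                hist[word] += 1
            hist /= hist.sum()
        hists.append(hist)
    return np.array(hists)


# train_paths / train_labels (0 = corn, 1 = weed) are assumed to exist.
train_desc = sift_descriptors(train_paths)
codebook = KMeans(n_clusters=VOCAB_SIZE, n_init=10).fit(np.vstack(train_desc))
svm = SVC(kernel="rbf").fit(bow_histograms(train_desc, codebook), train_labels)
```

The paper's custom speed-up module for the deep method is not specified in the abstract, so the sketch below shows only the off-the-shelf Faster R-CNN baseline it builds on, using torchvision's detection API (which postdates the paper) with the box predictor resized for two foreground classes, corn and weed.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Load a COCO-pretrained Faster R-CNN and swap in a 3-way head
# (background + corn + weed) for fine-tuning on a corn/weed dataset.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=3)

model.eval()
with torch.no_grad():
    images = [torch.rand(3, 480, 640)]  # list of 3xHxW tensors in [0, 1]
    detections = model(images)  # per image: dict of boxes, labels, scores
```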


Metadata
Title
Shallow and Deep Model Investigation for Distinguishing Corn and Weeds
Authors
Yu Xia
Hongxun Yao
Xiaoshuai Sun
Yanhao Zhang
Copyright Year
2018
DOI
https://doi.org/10.1007/978-3-319-77380-3_66
