Learning Semantic Graphics Using Convolutional Encoder–Decoder Network for Autonomous Weeding in Paddy


Bibliographic Details
Published in: Frontiers in Plant Science, Vol. 10, p. 1404
Main Authors: Adhikari, Shyam Prasad; Yang, Heechan; Kim, Hyongsuk
Format: Journal Article
Language: English
Published: Lausanne: Frontiers Media SA, 31 October 2019
ISSN: 1664-462X
Online Access: Full text
Description
Summary: Weeds in agricultural farms are aggressive growers that compete with the crop for nutrients and other resources and reduce production. The increasing use of chemicals to control them has unintended consequences for human health and the environment. In this work, a novel neural network training method is proposed that combines semantic graphics for data annotation with an advanced encoder–decoder network for (a) automatic crop line detection and (b) weed (wild millet) detection in paddy fields. The detected crop lines act as guiding lines for an autonomous weeding robot performing inter-row weeding, whereas the detection of weeds enables autonomous intra-row weeding. The proposed data annotation method, semantic graphics, is intuitive, and the desired targets can be annotated easily with minimal labor. Also, the proposed "extended skip network" is an improved deep convolutional encoder–decoder neural network for efficient learning of semantic graphics. Quantitative evaluations of the proposed method demonstrated increments of 6.29% and 6.14% in mean intersection over union (mIoU) over the baseline network on the tasks of paddy line detection and wild millet detection, respectively. The proposed method also leads to a 3.56% increment in mIoU and a significantly higher recall compared to a popular bounding box-based object detection approach on the task of wild millet detection.
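For context on the headline numbers above: mean intersection over union (mIoU) averages, over all classes, the ratio of overlap to union between the predicted and ground-truth segmentation masks. The following is a minimal NumPy sketch of the metric under its usual definition; the two-class toy example (background vs. weed/line pixels) is an illustrative assumption, not data from the paper.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean IoU over classes for dense per-pixel labels.

    pred, target: integer arrays of identical shape with values
    in [0, num_classes). Classes absent from both masks are skipped
    so they do not distort the average.
    """
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:  # class absent from prediction and ground truth
            continue
        intersection = np.logical_and(pred_c, target_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))

# Toy usage with hypothetical 2x2 masks (0 = background, 1 = target class):
pred = np.array([[0, 1],
                 [1, 1]])
target = np.array([[0, 1],
                   [0, 1]])
print(mean_iou(pred, target, num_classes=2))  # (1/2 + 2/3) / 2 ≈ 0.583
```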
Edited by: Kioumars Ghamkhar, AgResearch, New Zealand
Reviewed by: Christopher James Bateman, Lincoln Agritech Ltd, New Zealand; Dong Xu, University of Missouri, United States
This article was submitted to Technical Advances in Plant Science, a section of the journal Frontiers in Plant Science
DOI: 10.3389/fpls.2019.01404