Phenotyping grapevine red blotch virus and grapevine leafroll-associated viruses before and after symptom expression through machine-learning analysis of hyperspectral images

Detailed bibliography
Published in: Frontiers in Plant Science, Volume 14; article 1117869
Main authors: Sawyer, Erica; Laroche-Pinel, Eve; Flasco, Madison; Cooper, Monica L.; Corrales, Benjamin; Fuchs, Marc; Brillante, Luca
Format: Journal Article
Language: English
Publication details: Frontiers Media SA, Switzerland, 10 March 2023
ISSN: 1664-462X
Description
Summary: Grapevine leafroll-associated viruses (GLRaVs) and grapevine red blotch virus (GRBV) cause substantial economic losses and concern to North America's grape and wine industries. Fast and accurate identification of these two groups of viruses is key to informing disease management strategies and limiting their spread by insect vectors in the vineyard. Hyperspectral imaging offers new opportunities for virus disease scouting. Here we used two machine learning methods, i.e., Random Forest (RF) and 3D-Convolutional Neural Network (CNN), to identify and distinguish leaves from red blotch-infected vines, leafroll-infected vines, and vines co-infected with both viruses, using spatiospectral information in the visible domain (510-710 nm). We captured hyperspectral images of about 500 leaves from 250 vines at two sampling times during the growing season (a pre-symptomatic stage at veraison and a symptomatic stage at mid-ripening). Concurrently, viral infections were determined in leaf petioles by polymerase chain reaction (PCR)-based assays using virus-specific primers and by visual assessment of disease symptoms. When binarily classifying infected vs. non-infected leaves, the CNN model reached an overall maximum accuracy of 87%, versus 82.8% for the RF model. Using the symptomatic dataset lowered the rate of false negatives. In a multiclass categorization of leaves, the CNN and RF models reached maximum accuracies of 77.7% and 76.9%, respectively (averaged across healthy and infected leaf categories). Both CNN and RF outperformed visual assessment of symptoms by experts when using RGB-segmented images. Interpretation of the RF results showed that the most important wavelengths were in the green, orange, and red subregions. While differentiation between plants co-infected with GLRaVs and GRBV proved relatively challenging, both models showed promising accuracies across infection categories.
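The summary describes the classification approach only at a high level. As a rough illustration of the Random Forest branch of such an analysis, the sketch below trains a binary infected-vs-healthy classifier on per-leaf mean reflectance spectra and inspects per-wavelength feature importances; the array names, shapes, and placeholder data are hypothetical and do not reflect the authors' actual pipeline or pre-processing.

```python
# Illustrative sketch only: hypothetical names, placeholder data, and a
# simplified pixel-averaged feature representation; not the study's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Assume `cubes` holds hyperspectral leaf images with shape
# (n_leaves, height, width, n_bands) covering ~510-710 nm, and `labels`
# is 1 for PCR-confirmed infection (GRBV and/or GLRaV) and 0 for healthy.
rng = np.random.default_rng(0)
n_leaves, n_bands = 500, 100
cubes = rng.random((n_leaves, 32, 32, n_bands))      # placeholder image cubes
labels = rng.integers(0, 2, size=n_leaves)           # placeholder PCR labels
wavelengths = np.linspace(510, 710, n_bands)

# Collapse each leaf cube to its mean reflectance spectrum (one value per band).
spectra = cubes.reshape(n_leaves, -1, n_bands).mean(axis=1)

X_train, X_test, y_train, y_test = train_test_split(
    spectra, labels, test_size=0.3, random_state=0, stratify=labels)

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X_train, y_train)
print("binary accuracy:", accuracy_score(y_test, rf.predict(X_test)))

# Feature importances indicate which wavelengths drive the classification,
# analogous to the green/orange/red subregions highlighted in the article.
top = np.argsort(rf.feature_importances_)[::-1][:5]
print("most informative wavelengths (nm):", np.round(wavelengths[top], 1))
```

A 3D-CNN, as used in the article, would instead operate on the full spatiospectral cubes rather than averaged spectra; the RF baseline above is shown only because its feature importances map directly onto individual wavelengths.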
This article was submitted to Sustainable and Intelligent Phytoprotection, a section of the journal Frontiers in Plant Science
These authors contributed equally to this work
Reviewed by: Paulo Adriano Zaini, University of California, Davis, CA, United States; Jeremy R. Thompson, Plant Health & Environment Laboratories (MPI), New Zealand
Edited by: Ning Yang, Jiangsu University, China
DOI: 10.3389/fpls.2023.1117869