An Efficient Sampling-Based Algorithms Using Active Learning and Manifold Learning for Multiple Unmanned Aerial Vehicle Task Allocation under Uncertainty


Full Description

Bibliographic Details
Published in: Sensors (Basel, Switzerland), Vol. 18, Issue 8, p. 2645
Main authors: Fu, Xiaowei; Wang, Hui; Li, Bin; Gao, Xiaoguang
Format: Journal Article
Language: English
Published: Switzerland, MDPI AG, 12 August 2018
ISSN: 1424-8220
Online Access: Full text
Description

Abstract: This paper presents a sampling-based approximation for multiple unmanned aerial vehicle (UAV) task allocation under uncertainty. Our goal is to reduce the amount of computation while improving the accuracy of the algorithm. To this end, Gaussian process regression models are constructed from a sample set of uncertainty parameters and task rewards, and this training set is iteratively refined by active learning and manifold learning. First, a manifold learning method screens the samples and constructs a sparse graph that represents the distribution of all samples through a small number of them. Then, multi-point sampling is introduced into the active learning method to obtain the training set from the sparse graph quickly and efficiently. The proposed hybrid sampling strategy selects a limited number of representative samples to construct the training set. Simulation analyses demonstrate that the sampling-based algorithm can efficiently obtain a high-precision evaluation model of the impact of uncertain parameters on task reward.
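The iterative refinement described in the abstract — fit a Gaussian process on the labelled samples, then pick a batch of new samples via active learning — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the RBF kernel, the maximum-posterior-variance acquisition rule, and the batch size are assumptions, and the manifold-learning step is abstracted into a fixed pool of candidate points (standing in for the nodes of the sparse graph).

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential kernel between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior(X_train, y_train, X_query, noise=1e-4):
    """GP posterior mean and variance at query points (zero-mean prior)."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_query, X_train)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s @ alpha
    v = np.linalg.solve(L, K_s.T)
    var = np.diag(rbf_kernel(X_query, X_query)) - (v**2).sum(0)
    return mean, np.maximum(var, 0.0)  # clamp tiny negative variances

def active_learning_gp(candidates, reward_fn, n_init=5, n_rounds=5, batch=3, rng=None):
    """Iteratively grow a GP training set by querying, in each round, the
    batch of candidate points with the largest posterior variance
    (a simple multi-point active-learning rule)."""
    rng = rng or np.random.default_rng(0)
    idx = list(rng.choice(len(candidates), n_init, replace=False))
    for _ in range(n_rounds):
        X = candidates[idx]
        y = np.array([reward_fn(x) for x in X])          # evaluate task reward
        _, var = gp_posterior(X, y, candidates)
        var[idx] = -np.inf                               # never re-pick a labelled point
        idx += list(np.argsort(var)[-batch:])            # most-uncertain batch
    return np.array(idx)
```

For example, with a 1-D grid of uncertainty-parameter candidates and a toy reward function, `active_learning_gp(np.linspace(0, 1, 50).reshape(-1, 1), lambda x: float(np.sin(6 * x[0])))` returns the indices of `n_init + n_rounds * batch` distinct training samples, spread toward regions where the GP is least certain.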
DOI: 10.3390/s18082645