Swin‐YOLOX for autonomous and accurate drone visual landing.

Saved in:
Bibliographic Details
Title: Swin‐YOLOX for autonomous and accurate drone visual landing.
Authors: Chen, Rongbin, Xu, Ying, Sinal, Mohamad Sabri bin, Zhong, Dongsheng, Li, Xinru, Li, Bo, Guo, Yadong, Luo, Qingjia
Source: IET Image Processing (Wiley-Blackwell); 12/11/2024, Vol. 18 Issue 14, p4731-4744, 14p
Keywords: COMPUTER vision, TRANSFORMER models, IMAGE processing, DATABASES, REMOTE sensing
Abstract: As UAVs are increasingly used in military and civilian fields, their intelligent applications have also developed rapidly. However, high‐precision autonomous landing remains an industry challenge. GPS‐based methods fail where GPS signals are unavailable; multi‐sensor combined navigation is difficult to deploy widely because of its high equipment requirements; and traditional vision‐based methods are sensitive to scale transformation, background complexity, and occlusion, all of which degrade detection performance. In this paper, we address these problems by applying deep learning to target detection in the UAV landing phase. First, we optimize the backbone network of YOLOX and propose the Swin Transformer based YOLOX (Swin‐YOLOX) UAV landing visual positioning algorithm. Second, we extend the UAV‐VPD database with a batch of newly acquired data, annotated with an AI-assisted method, to build the UAV‐VPDV2 database. Finally, we use the RBN batch normalization method to improve the model's ability to extract effective features from the data. Extensive experiments show that the AP50 of the proposed method reaches 98.7%, outperforming other detection models, with a detection speed of 38.4 frames/second, which meets the requirements of real‐time detection. [ABSTRACT FROM AUTHOR]
Database: Biomedical Index
ISSN: 1751-9659
DOI: 10.1049/ipr2.13282