TransFusionOdom: Transformer-based LiDAR-Inertial Fusion Odometry Estimation

Detailed Bibliography
Published in: IEEE Sensors Journal, Volume 23, Issue 18, p. 1
Main Authors: Sun, Leyuan; Ding, Guanqun; Qiu, Yue; Yoshiyasu, Yusuke; Kanehiro, Fumio
Format: Journal Article
Language: English
Published: New York: IEEE, 15.09.2023
ISSN: 1530-437X, 1558-1748
Description
Summary: Multi-modal sensor fusion is a commonly used approach to enhance the performance of odometry estimation, which is a fundamental module for mobile robots. Recently, learning-based approaches have garnered attention in this field due to their robust, non-handcrafted designs. However, the question of how to perform fusion among different modalities in a supervised sensor-fusion odometry estimation task remains a challenging issue. Simple operations, such as element-wise summation and concatenation, cannot assign adaptive attentional weights to incorporate different modalities efficiently, which makes it difficult to achieve competitive odometry results. Meanwhile, the Transformer architecture has shown potential for multi-modal fusion tasks, particularly in vision-language domains. In this work, we propose an end-to-end supervised Transformer-based LiDAR-Inertial fusion framework (TransFusionOdom) for odometry estimation. The multi-attention fusion module applies different fusion approaches to homogeneous and heterogeneous modalities, addressing the overfitting that can arise from blindly increasing model complexity. Additionally, to interpret the learning process of the Transformer-based multi-modal interactions, a general visualization approach is introduced to illustrate the interactions between modalities. Moreover, exhaustive ablation studies evaluate different multi-modal fusion strategies to verify the performance of the proposed fusion strategy. A synthetic multi-modal dataset is made public to validate the generalization ability of the proposed fusion strategy, which also works for other combinations of modalities. Quantitative and qualitative odometry evaluations on the KITTI dataset verify that the proposed TransFusionOdom achieves superior performance compared with other learning-based related works.
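
The abstract contrasts fixed fusion operations (element-wise summation, concatenation) with attention, which can weight each modality adaptively. As a purely illustrative sketch, and not the authors' published implementation, the following PyTorch snippet shows one way cross-attention can fuse heterogeneous LiDAR and IMU feature sequences into a 6-DoF relative-pose estimate; the dimensions, the use of nn.MultiheadAttention, and the pooling/regression head are all assumptions for demonstration.

import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Fuse LiDAR and IMU feature sequences with cross-attention so that
    learned attention weights, rather than a fixed summation, decide how
    much each modality contributes to the pose estimate."""
    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        # Queries come from LiDAR tokens, keys/values from IMU tokens,
        # yielding adaptive per-token cross-modal weights.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        self.head = nn.Linear(d_model, 6)  # 6-DoF relative pose (tx, ty, tz, roll, pitch, yaw)

    def forward(self, lidar_feats: torch.Tensor, imu_feats: torch.Tensor) -> torch.Tensor:
        # lidar_feats: (B, N_l, d_model); imu_feats: (B, N_i, d_model)
        fused, _attn_weights = self.cross_attn(lidar_feats, imu_feats, imu_feats)
        fused = self.norm(lidar_feats + fused)  # residual connection + layer norm
        pooled = fused.mean(dim=1)              # global average pooling over tokens
        return self.head(pooled)                # per-frame pose regression

if __name__ == "__main__":
    model = CrossAttentionFusion()
    lidar = torch.randn(2, 64, 256)   # e.g., 64 LiDAR feature tokens per frame
    imu = torch.randn(2, 10, 256)     # e.g., 10 IMU feature tokens per frame
    print(model(lidar, imu).shape)    # torch.Size([2, 6])

Replacing the cross-attention call with a plain sum or concatenation of pooled features would reproduce the fixed-weight baseline the abstract argues against, since every modality would then contribute with a constant, input-independent weight.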
DOI: 10.1109/JSEN.2023.3302401