TransFusionOdom: Transformer-based LiDAR-Inertial Fusion Odometry Estimation
Saved in:
| Published in: | IEEE sensors journal, Volume 23, Issue 18, pp. 22064-22079 |
|---|---|
| Main authors: | Sun, Leyuan; Ding, Guanqun; Qiu, Yue; Yoshiyasu, Yusuke; Kanehiro, Fumio |
| Format: | Journal Article |
| Language: | English |
| Publication details: | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 15.09.2023 |
| Subjects: | LiDAR-inertial odometry; multi-modal learning; sensor data fusion; transformer |
| ISSN: | 1530-437X, 1558-1748 |
| Online access: | Get full text |
| Abstract | Multi-modal fusion of sensors is a commonly used approach to enhance the performance of odometry estimation, which is also a fundamental module for mobile robots. Recently, learning-based approaches have garnered attention in this field due to their robust, non-handcrafted designs. However, the question of "How to perform fusion among different modalities in a supervised sensor fusion odometry estimation task?" remains one of the challenging open issues. Simple operations, such as element-wise summation and concatenation, are not capable of assigning adaptive attentional weights to incorporate different modalities efficiently, which makes it difficult to achieve competitive odometry results. Meanwhile, the Transformer architecture has shown potential for multi-modal fusion tasks, particularly in vision-language domains. In this work, we propose an end-to-end supervised Transformer-based LiDAR-Inertial fusion framework (namely TransFusionOdom) for odometry estimation. The multi-attention fusion module applies different fusion approaches to homogeneous and heterogeneous modalities, addressing the overfitting problem that can arise from blindly increasing model complexity. Additionally, to interpret the learning process of the Transformer-based multi-modal interactions, a general visualization approach is introduced to illustrate the interactions between modalities. Moreover, exhaustive ablation studies evaluate different multi-modal fusion strategies to verify the performance of the proposed fusion strategy. A synthetic multi-modal dataset is made public to validate the generalization ability of the proposed fusion strategy, which also works for other combinations of modalities. Quantitative and qualitative odometry evaluations on the KITTI dataset verify that the proposed TransFusionOdom achieves superior performance compared with other learning-based related works. |
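To make the contrast between the simple fusion operations criticized in the abstract (element-wise summation, concatenation) and adaptive attention-based fusion concrete, the following is a minimal PyTorch sketch of cross-attention fusion between a LiDAR feature sequence and an IMU feature sequence. The module name, dimensions, and single-block design are illustrative assumptions, not the paper's actual multi-attention fusion architecture.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Hypothetical cross-attention fusion of two modality feature sequences.

    Unlike element-wise summation or concatenation, the learned attention
    weights let each LiDAR token attend adaptively to the IMU tokens.
    """

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, lidar_feat, imu_feat):
        # Query: LiDAR tokens; key/value: IMU tokens. The attention map
        # assigns a learned weight to every (LiDAR token, IMU token) pair.
        fused, attn_weights = self.attn(query=lidar_feat, key=imu_feat, value=imu_feat)
        return self.norm(lidar_feat + fused), attn_weights

# Toy comparison with the "simple operations" named in the abstract.
lidar = torch.randn(2, 64, 256)  # (batch, LiDAR tokens, feature dim)
imu = torch.randn(2, 16, 256)    # (batch, IMU tokens, feature dim)

summed = lidar[:, :16] + imu                  # element-wise summation: fixed 1:1 weights
concat = torch.cat([lidar[:, :16], imu], -1)  # concatenation: no weighting at all
fused, attn = CrossAttentionFusion()(lidar, imu)
print(fused.shape, attn.shape)  # torch.Size([2, 64, 256]) torch.Size([2, 64, 16])
```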
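The "general visualization approach" for cross-modal interactions mentioned in the abstract is not reproduced here, but a generic way to inspect such interactions, assuming the hypothetical fusion module sketched above, is to plot the attention weight matrix:

```python
import matplotlib.pyplot as plt
import torch

# Re-uses the hypothetical CrossAttentionFusion module sketched above.
lidar = torch.randn(1, 64, 256)
imu = torch.randn(1, 16, 256)
_, attn = CrossAttentionFusion()(lidar, imu)

weights = attn[0].detach().numpy()  # (LiDAR tokens, IMU tokens)

fig, ax = plt.subplots(figsize=(6, 4))
im = ax.imshow(weights, aspect="auto", cmap="viridis")
ax.set_xlabel("IMU token")
ax.set_ylabel("LiDAR token")
ax.set_title("Cross-modal attention weights (illustrative)")
fig.colorbar(im, ax=ax, label="attention weight")
plt.show()
```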
| Author | Sun, Leyuan; Ding, Guanqun; Qiu, Yue; Yoshiyasu, Yusuke; Kanehiro, Fumio |
| Author affiliations | 1. Leyuan Sun (ORCID: 0000-0001-6123-9339), CNRS-AIST Joint Robotics Laboratory (JRL), IRL, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Ibaraki, Japan; 2. Guanqun Ding, Digital Architecture Research Center (DigiARC), AIST, Tokyo, Japan; 3. Yue Qiu, Computer Vision Research Team, Artificial Intelligence Research Center (AIRC), AIST, Tsukuba, Ibaraki, Japan; 4. Yusuke Yoshiyasu (ORCID: 0000-0002-0433-9832), Computer Vision Research Team, AIRC, AIST, Tsukuba, Ibaraki, Japan; 5. Fumio Kanehiro (ORCID: 0000-0002-0277-3467), CNRS-AIST Joint Robotics Laboratory (JRL), IRL, AIST, Tsukuba, Ibaraki, Japan |
| BackLink | https://hal.science/hal-04745599 (View record in HAL) |
| CODEN | ISJEAZ |
| ContentType | Journal Article |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023. Distributed under a Creative Commons Attribution 4.0 International License. |
| DOI | 10.1109/JSEN.2023.3302401 |
| DatabaseName | IEEE Xplore (IEEE); IEEE All-Society Periodicals Package (ASPP) 1998–Present; IEEE Electronic Library (IEL); CrossRef; Electronics & Communications Abstracts; Solid State and Superconductivity Abstracts; Technology Research Database; Advanced Technologies Database with Aerospace; Hyper Article en Ligne (HAL) |
| DatabaseTitle | CrossRef; Solid State and Superconductivity Abstracts; Technology Research Database; Advanced Technologies Database with Aerospace; Electronics & Communications Abstracts |
| DatabaseTitleList | Solid State and Superconductivity Abstracts |
| Discipline | Geography; Engineering; Computer Science |
| EISSN | 1558-1748 |
| EndPage | 22079 |
| ExternalDocumentID | oai:HAL:hal-04745599v1 10_1109_JSEN_2023_3302401 10214516 |
| Genre | orig-research |
| GrantInformation | JST-SPRING (grant JPMJSP2124); JSPS KAKENHI (grant 23H03426) |
| ISICitedReferencesCount | 16 |
| ISSN | 1530-437X |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 18 |
| Language | English |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 Distributed under a Creative Commons Attribution 4.0 International License: http://creativecommons.org/licenses/by/4.0 |
| ORCID | 0000-0002-0433-9832 0000-0002-0277-3467 0000-0001-6123-9339 |
| PageCount | 16 |
| PublicationDate | 2023-09-15 |
| PublicationPlace | New York |
| PublicationTitle | IEEE sensors journal |
| PublicationTitleAbbrev | JSEN |
| PublicationYear | 2023 |
| Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) Institute of Electrical and Electronics Engineers |
| SourceID | hal proquest crossref ieee |
| SourceType | Open Access Repository Aggregation Database Enrichment Source Index Database Publisher |
| StartPage | 22064 |
| SubjectTerms | Ablation; Attention mechanisms; Computer Science; Datasets; Estimation; Inertial fusion (reactor); Laser radar; Learning; Lidar; LiDAR-inertial odometry; Modules; multi-modal learning; Multisensor fusion; Odometry; Robotics; sensor data fusion; Sensor fusion; Sensors; Strategy; Task analysis; transformer; Vertex & Normal; Transformers |
| Title | TransFusionOdom: Transformer-based LiDAR-Inertial Fusion Odometry Estimation |
| URI | https://ieeexplore.ieee.org/document/10214516 https://www.proquest.com/docview/2865090590 https://hal.science/hal-04745599 |
| Volume | 23 |