Toward Robust LiDAR-Camera Fusion in BEV Space via Mutual Deformable Attention and Temporal Aggregation
Saved in:
| Published in: | IEEE Transactions on Circuits and Systems for Video Technology, Volume 34, Issue 7, pp. 5753-5764 |
|---|---|
| Main authors: | Wang, Jian; Li, Fan; An, Yi; Zhang, Xuchong; Sun, Hongbin |
| Format: | Journal Article |
| Language: | English |
| Published: | New York: IEEE, 01.07.2024; The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| Subjects: | 3D object detection; Cameras; Datasets; Detection algorithms; Effectiveness; Feature extraction; Formability; Laser radar; Lidar; LiDAR-camera fusion; Malfunctions; Model robustness; Object detection; Object recognition; Robustness; Sensors; Three-dimensional displays |
| ISSN: | 1051-8215, 1558-2205 |
| Online access: | Get full text |
| Abstract | LiDAR and camera are two critical sensors that can provide complementary information for accurate 3D object detection. Most works are devoted to improving the detection performance of fusion models on the clean and well-collected datasets. However, the collected point clouds and images in real scenarios may be corrupted to various degrees due to potential sensor malfunctions, which greatly affects the robustness of the fusion model and poses a threat to safe deployment. In this paper, we first analyze the shortcomings of most fusion detectors, which rely mainly on the LiDAR branch, and the potential of the bird's eye-view (BEV) paradigm in dealing with partial sensor failures. Based on that, we present a robust LiDAR-camera fusion pipeline in unified BEV space with two novel designs under four typical LiDAR-camera malfunction cases. Specifically, a mutual deformable attention is proposed to dynamically model the spatial feature relationship and reduce the interference caused by the corrupted modality, and a temporal aggregation module is devised to fully utilize the rich information in the temporal domain. Together with the decoupled feature extraction for each modality and holistic BEV space fusion, the proposed detector, termed RobBEV, can work stably regardless of single-modality data corruption. Extensive experiments on the large-scale nuScenes dataset under robust settings demonstrate the effectiveness of our approach. |
|---|---|
| Author | Wang, Jian; Li, Fan; An, Yi; Zhang, Xuchong; Sun, Hongbin |
| Author_xml | 1. Wang, Jian (ORCID: 0000-0002-4091-2165; wj851329121@stu.xjtu.edu.cn), Shaanxi Key Laboratory of Deep Space Exploration Intelligent Information Technology, School of Information and Communications Engineering, Xi'an Jiaotong University, Xi'an, China. 2. Li, Fan (ORCID: 0000-0002-7566-1634; lifan@mail.xjtu.edu.cn), same affiliation as above. 3. An, Yi (an_yi31@stu.xjtu.edu.cn), same affiliation as above. 4. Zhang, Xuchong (ORCID: 0000-0003-2772-2700; zhangxc0329@mail.xjtu.edu.cn), College of Artificial Intelligence, Xi'an Jiaotong University, Xi'an, China. 5. Sun, Hongbin (ORCID: 0000-0003-2153-2906; hsun@mail.xjtu.edu.cn), College of Artificial Intelligence, Xi'an Jiaotong University, Xi'an, China |
| CODEN | ITCTEM |
| ContentType | Journal Article |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2024 |
| DOI | 10.1109/TCSVT.2024.3366664 |
| Discipline | Engineering |
| EISSN | 1558-2205 |
| EndPage | 5764 |
| Genre | orig-research |
| GrantInformation_xml | Key Research and Development Project in Shaanxi Province (grant 2023-ZDLNY-65); National Key Research and Development Program of China (grant 2022ZD0115803); Natural Science Basic Research Plan in Shaanxi Province of China (grant 2023-JC-JQ-51) |
| ISICitedReferencesCount | 82 |
| ISSN | 1051-8215 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 7 |
| Language | English |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
| ORCID | 0000-0003-2772-2700 0000-0003-2153-2906 0000-0002-4091-2165 0000-0002-7566-1634 |
| PageCount | 12 |
| PublicationCentury | 2000 |
| PublicationDate | 2024-07-01 |
| PublicationDecade | 2020 |
| PublicationPlace | New York |
| PublicationTitle | IEEE transactions on circuits and systems for video technology |
| PublicationTitleAbbrev | TCSVT |
| PublicationYear | 2024 |
| Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| StartPage | 5753 |
| SubjectTerms | 3D object detection; Cameras; Datasets; Detection algorithms; Effectiveness; Feature extraction; Formability; Laser radar; Lidar; LiDAR-camera fusion; Malfunctions; Model robustness; Object detection; Object recognition; Robustness; Sensors; Three-dimensional displays |
| Title | Toward Robust LiDAR-Camera Fusion in BEV Space via Mutual Deformable Attention and Temporal Aggregation |
| URI | https://ieeexplore.ieee.org/document/10438483 https://www.proquest.com/docview/3075426660 |
| Volume | 34 |
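The abstract describes a mutual deformable attention module that samples features at learned offsets around a reference point to suppress interference from a corrupted modality. The paper's exact design is not reproduced in this record; the sketch below is only a generic, single-head, single-scale illustration of deformable-attention sampling in the style of Deformable DETR, with hypothetical function names (`bilinear_sample`, `deformable_attention`) and NumPy in place of a deep-learning framework.

```python
import numpy as np

def bilinear_sample(feat, x, y):
    """Bilinearly sample a feature map feat of shape (H, W, C)
    at a continuous coordinate (x, y), clipped to the map bounds."""
    H, W, _ = feat.shape
    x = np.clip(x, 0, W - 1)
    y = np.clip(y, 0, H - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    wx, wy = x - x0, y - y0
    return ((1 - wx) * (1 - wy) * feat[y0, x0]
            + wx * (1 - wy) * feat[y0, x1]
            + (1 - wx) * wy * feat[y1, x0]
            + wx * wy * feat[y1, x1])

def deformable_attention(query_xy, feat, offsets, weights):
    """Aggregate K features sampled around a reference point.

    query_xy: (2,) reference point (x, y) in feature-map coordinates
    offsets:  (K, 2) sampling offsets (learned, in a real model)
    weights:  (K,) attention logits, softmax-normalized here
    """
    w = np.exp(weights - weights.max())
    w = w / w.sum()
    out = np.zeros(feat.shape[-1])
    for k in range(len(offsets)):
        x = query_xy[0] + offsets[k, 0]
        y = query_xy[1] + offsets[k, 1]
        out += w[k] * bilinear_sample(feat, x, y)
    return out
```

Because the attention weights are normalized and offsets are learned per query, the module can in principle down-weight sampling locations that fall on corrupted-modality features, which is the intuition the abstract appeals to.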