A RGB-Thermal Image Segmentation Method Based on Parameter Sharing and Attention Fusion for Safe Autonomous Driving


Bibliographic Details
Published in: IEEE Transactions on Intelligent Transportation Systems, Vol. 25, No. 6, pp. 5122-5137
Main authors: Li, Guofa, Lin, Yongjie, Ouyang, Delin, Li, Shen, Luo, Xiao, Qu, Xingda, Pi, Dawei, Li, Shengbo Eben
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), June 1, 2024
ISSN: 1524-9050, 1558-0016
Online access: Full text
Abstract In this paper, we propose a new RGB-thermal image segmentation method based on parameter sharing and attention fusion for safe autonomous driving. An encoder-decoder network structure is adopted. The encoder, which has shared convolution layer parameters and private batch normalization layer parameters (parameter sharing scheme), is used to extract features from RGB and thermal images. The extracted features are then fused by spatial and channel attention. The output of each residual block is fused, and the self-learning weight is used to integrate the fusion information of all residual blocks of the same levels. Subsequently, the fused features are integrated through a feature integration (FI) module in the decoder. Cross-entropy supervision of segmentation and edge is performed on the outputs of the decoders. Our proposed method is evaluated and compared with 17 state-of-the-art image segmentation methods, both qualitatively and quantitatively on the MFNet dataset which includes various objects in urban scenes. The results show that the proposed method outperforms previous methods by at least 0.3% and 1.8% in MRecall and MIoU, respectively, providing foundations for the development of autonomous driving technologies for safety enhancement.
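To make the architecture described in the abstract more concrete (shared convolution weights with modality-private batch normalization, followed by channel and spatial attention fusion and a self-learning weight), the snippet below gives a minimal PyTorch-style sketch. It is not the authors' implementation: the module names, channel sizes, the exact attention design, and the assumption that the thermal image is replicated to three channels so the shared convolution applies to both inputs are all illustrative assumptions.

```python
import torch
import torch.nn as nn


class SharedConvPrivateBN(nn.Module):
    """One encoder stage following the parameter-sharing scheme from the abstract:
    the convolution weights are shared by the RGB and thermal branches, while each
    modality keeps its own (private) BatchNorm layer."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.bn_rgb = nn.BatchNorm2d(out_ch)      # private to the RGB branch
        self.bn_thermal = nn.BatchNorm2d(out_ch)  # private to the thermal branch
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x_rgb: torch.Tensor, x_thermal: torch.Tensor):
        f_rgb = self.relu(self.bn_rgb(self.conv(x_rgb)))
        f_thermal = self.relu(self.bn_thermal(self.conv(x_thermal)))  # same conv weights
        return f_rgb, f_thermal


class AttentionFusion(nn.Module):
    """Fuses the two modality features with channel attention followed by spatial
    attention; the paper's exact attention design may differ, this only illustrates
    the general idea sketched in the abstract."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, max(2 * channels // reduction, 1), kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(max(2 * channels // reduction, 1), channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.spatial_att = nn.Sequential(
            nn.Conv2d(2 * channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )
        # Self-learning weight applied when the fusion results of residual blocks
        # at the same level are integrated (a single learnable scalar here).
        self.level_weight = nn.Parameter(torch.ones(1))

    def forward(self, f_rgb: torch.Tensor, f_thermal: torch.Tensor) -> torch.Tensor:
        both = torch.cat([f_rgb, f_thermal], dim=1)            # (B, 2C, H, W)
        fused = (f_rgb + f_thermal) * self.channel_att(both)   # channel re-weighting
        fused = fused * self.spatial_att(both)                  # spatial re-weighting
        return self.level_weight * fused


if __name__ == "__main__":
    stage = SharedConvPrivateBN(in_ch=3, out_ch=64)
    fuse = AttentionFusion(channels=64)
    rgb = torch.randn(2, 3, 120, 160)
    thermal = torch.randn(2, 3, 120, 160)  # thermal replicated to 3 channels (assumption)
    print(fuse(*stage(rgb, thermal)).shape)  # torch.Size([2, 64, 120, 160])
```

In the full method described in the abstract, such stages would be repeated at every level of the residual encoder, the fused outputs of all residual blocks at the same level would be combined via the self-learning weights, and the combined features would then pass through the feature integration (FI) module of the decoder, with cross-entropy supervision on both segmentation and edge outputs.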
Author Li, Guofa
Ouyang, Delin
Lin, Yongjie
Luo, Xiao
Pi, Dawei
Li, Shengbo Eben
Li, Shen
Qu, Xingda
Author_xml – sequence: 1
  givenname: Guofa
  orcidid: 0000-0002-7889-4695
  surname: Li
  fullname: Li, Guofa
  email: hanshan198@gmail.com
  organization: College of Mechanical and Vehicle Engineering, Chongqing University, Chongqing, China
– sequence: 2
  givenname: Yongjie
  surname: Lin
  fullname: Lin, Yongjie
  email: 2991946448@qq.com
  organization: College of Mechatronics and Control Engineering, Shenzhen University, Shenzhen, Guangdong, China
– sequence: 3
  givenname: Delin
  orcidid: 0000-0003-3137-6089
  surname: Ouyang
  fullname: Ouyang, Delin
  email: ouyangdelin2021@email.szu.edu.cn
  organization: College of Mechatronics and Control Engineering, Shenzhen University, Shenzhen, Guangdong, China
– sequence: 4
  givenname: Shen
  orcidid: 0000-0002-7111-8861
  surname: Li
  fullname: Li, Shen
  email: sli299@tsinghua.edu.cn
  organization: School of Civil Engineering, Tsinghua University, Beijing, China
– sequence: 5
  givenname: Xiao
  surname: Luo
  fullname: Luo, Xiao
  email: luoxiao@faw.com.cn
  organization: State Key Laboratory of Comprehensive Technology on Automobile Vibration and Noise and Safety Control, FAW Company Ltd., Intelligent Connected Vehicle Development Institute, Changchun, Jilin, China
– sequence: 6
  givenname: Xingda
  orcidid: 0000-0003-1764-0357
  surname: Qu
  fullname: Qu, Xingda
  email: quxd@szu.edu.cn
  organization: College of Mechatronics and Control Engineering, Shenzhen University, Shenzhen, Guangdong, China
– sequence: 7
  givenname: Dawei
  orcidid: 0000-0001-9135-2623
  surname: Pi
  fullname: Pi, Dawei
  email: pidawei@mail.njust.edu.cn
  organization: School of Mechanical Engineering, Nanjing University of Science and Technology, Nanjing, China
– sequence: 8
  givenname: Shengbo Eben
  orcidid: 0000-0003-4923-3633
  surname: Li
  fullname: Li, Shengbo Eben
  email: lishbo@tsinghua.edu.cn
  organization: State Key Laboratory of Automotive Safety and Energy, School of Vehicle and Mobility, Tsinghua University, Beijing, China
CODEN ITISFG
CitedBy_id crossref_primary_10_1007_s00530_024_01523_5
crossref_primary_10_1109_TITS_2024_3461468
crossref_primary_10_1109_TITS_2025_3543235
crossref_primary_10_1109_TITS_2025_3549516
crossref_primary_10_3390_e27050526
crossref_primary_10_3390_s24248217
crossref_primary_10_1007_s10489_024_05788_1
crossref_primary_10_1002_cpe_70110
crossref_primary_10_3390_s25051595
Cites_doi 10.1109/CVPRW.2016.60
10.1016/j.geits.2022.100002
10.1109/CVPR.2015.7298594
10.1109/ICCV.2017.324
10.1162/neco.1997.9.8.1735
10.1007/s10462-019-09784-7
10.1109/TIV.2022.3223728
10.1016/j.inffus.2021.02.008
10.1109/ICCV48922.2021.00986
10.1109/LRA.2022.3155202
10.1109/ICCV.2019.00533
10.1007/978-3-030-58621-8_33
10.1016/j.inffus.2021.06.008
10.1109/TCSVT.2021.3069812
10.1109/TASE.2020.2993143
10.1109/LGRS.2022.3179721
10.5555/3045118.3045167
10.1109/CVPRW56347.2022.00341
10.1007/978-3-319-24574-4_28
10.1109/CVPR.2016.90
10.1007/978-3-319-54181-5_14
10.1109/CVPR.2018.00199
10.1016/j.inffus.2018.02.004
10.1007/978-3-030-01261-8_20
10.1109/TIP.2023.3256762
10.1016/j.inffus.2020.05.002
10.1088/1742-6596/1168/2/022022
10.1016/j.neunet.2018.05.002
10.1016/j.patcog.2021.108468
10.18653/v1/P18-1005
10.1007/s10462-018-9641-3
10.1016/j.patrec.2020.01.011
10.1109/ICIP40778.2020.9191080
10.1016/j.geits.2022.100003
10.1109/TPAMI.2016.2644615
10.1109/ICCV.2017.606
10.15588/1607-3274-2018-1-14
10.1109/TIV.2022.3223131
10.1007/978-3-031-20056-4_2
10.1109/TPAMI.2017.2699184
10.1109/TPAMI.2021.3054719
10.1109/CVPR.2017.660
10.1109/TSMC.2023.3276218
10.1109/TPAMI.2021.3059968
10.1109/TVCG.2016.2598831
10.1109/CVPR.2015.7298965
10.1109/IROS.2017.8206396
10.1007/s10773-019-04177-6
10.1109/JBHI.2023.3305830
10.1016/j.geits.2022.100022
10.1109/LRA.2019.2904733
10.1109/ICIP.2019.8803025
10.1109/TSMC.2023.3283021
10.1016/j.geits.2022.100026
10.1109/TIP.2021.3109518
10.1109/TITS.2023.3268063
10.1007/s41095-020-0199-z
10.1109/WACV.2018.00163
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2024
Copyright_xml – notice: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2024
DBID 97E
RIA
RIE
AAYXX
CITATION
7SC
7SP
8FD
FR3
JQ2
KR7
L7M
L~C
L~D
DOI 10.1109/TITS.2023.3332350
DatabaseName IEEE All-Society Periodicals Package (ASPP) 2005–Present
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Technology Research Database
Engineering Research Database
ProQuest Computer Science Collection
Civil Engineering Abstracts
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
DatabaseTitle CrossRef
Civil Engineering Abstracts
Technology Research Database
Computer and Information Systems Abstracts – Academic
Electronics & Communications Abstracts
ProQuest Computer Science Collection
Computer and Information Systems Abstracts
Engineering Research Database
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts Professional
DatabaseTitleList
Civil Engineering Abstracts
Database_xml – sequence: 1
  dbid: RIE
  name: IEEE Electronic Library (IEL)
  url: https://ieeexplore.ieee.org/
  sourceTypes: Publisher
DeliveryMethod fulltext_linktorsrc
Discipline Engineering
EISSN 1558-0016
EndPage 5137
ExternalDocumentID 10_1109_TITS_2023_3332350
10337777
Genre orig-research
GrantInformation_xml – fundername: NSF China
  grantid: 52272421
  funderid: 10.13039/501100001809
– fundername: Fundamental Research Funds for the Central Universities
  grantid: 2023CDJXY-023
  funderid: 10.13039/501100012226
– fundername: NSF Chongqing
  grantid: CSTB2023NSCQ-MSX0985
  funderid: 10.13039/501100005230
GroupedDBID -~X
0R~
29I
4.4
5GY
5VS
6IK
97E
AAJGR
AARMG
AASAJ
AAWTH
ABAZT
ABQJQ
ABVLG
ACGFO
ACGFS
ACIWK
ACNCT
AENEX
AETIX
AGQYO
AGSQL
AHBIQ
AIBXA
AKJIK
AKQYR
ALMA_UNASSIGNED_HOLDINGS
ATWAV
BEFXN
BFFAM
BGNUA
BKEBE
BPEOZ
CS3
DU5
EBS
EJD
HZ~
H~9
IFIPE
IPLJI
JAVBF
LAI
M43
O9-
OCL
P2P
PQQKQ
RIA
RIE
RNS
ZY4
AAYXX
CITATION
7SC
7SP
8FD
FR3
JQ2
KR7
L7M
L~C
L~D
IEDL.DBID RIE
ISICitedReferencesCount 16
ISICitedReferencesURI http://www.webofscience.com/api/gateway?GWVersion=2&SrcApp=Summon&SrcAuth=ProQuest&DestLinkType=CitingArticles&DestApp=WOS_CPL&KeyUT=001121230700001&url=https%3A%2F%2Fcvtisr.summon.serialssolutions.com%2F%23%21%2Fsearch%3Fho%3Df%26include.ft.matches%3Dt%26l%3Dnull%26q%3D
ISSN 1524-9050
IngestDate Mon Jun 30 06:16:43 EDT 2025
Sat Nov 29 06:35:05 EST 2025
Tue Nov 18 22:25:23 EST 2025
Wed Aug 27 02:33:16 EDT 2025
IsPeerReviewed true
IsScholarly true
Issue 6
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
LinkModel DirectLink
Notes ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
content type line 14
ORCID 0000-0002-7889-4695
0000-0002-7111-8861
0000-0003-1764-0357
0000-0001-9135-2623
0000-0003-3137-6089
0000-0003-4923-3633
PQID 3062736797
PQPubID 75735
PageCount 16
ParticipantIDs proquest_journals_3062736797
crossref_citationtrail_10_1109_TITS_2023_3332350
crossref_primary_10_1109_TITS_2023_3332350
ieee_primary_10337777
PublicationCentury 2000
PublicationDate 2024-06-01
PublicationDateYYYYMMDD 2024-06-01
PublicationDate_xml – month: 06
  year: 2024
  text: 2024-06-01
  day: 01
PublicationDecade 2020
PublicationPlace New York
PublicationPlace_xml – name: New York
PublicationTitle IEEE transactions on intelligent transportation systems
PublicationTitleAbbrev TITS
PublicationYear 2024
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Publisher_xml – name: IEEE
– name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
References ref13
ref57
ref56
ref59
ref14
ref58
ref53
ref52
ref11
ref55
ref10
ref54
ref16
ref18
Kendall (ref31) 2015
ref51
ref50
Chen (ref15) 2017
ref45
ref48
ref47
ref42
ref44
ref43
Simonyan (ref12) 2014
ref49
ref8
ref7
ref9
ref4
ref3
ref6
ref5
Visin (ref17) 2015
Wang (ref41); 33
ref35
ref34
ref37
ref30
Chung (ref19) 2014
ref33
ref32
Chen (ref46)
ref2
ref1
ref39
ref38
Nayak (ref36) 2022; 2020
Santurkar (ref40); 31
ref24
ref68
ref23
ref67
ref26
ref25
ref20
ref64
ref63
ref22
ref66
ref21
ref65
ref27
Zhang (ref28) 2022
ref29
ref60
ref62
ref61
References_xml – ident: ref16
  doi: 10.1109/CVPRW.2016.60
– volume: 2020
  start-page: 1607
  year: 2022
  ident: ref36
  article-title: A systematic exploration of image fusion: A review
  publication-title: ICDSMLA
– ident: ref63
  doi: 10.1016/j.geits.2022.100002
– volume: 33
  start-page: 4835
  volume-title: Proc. Adv. Neural Inf. Process. Syst.
  ident: ref41
  article-title: Deep multimodal fusion by channel exchanging
– ident: ref10
  doi: 10.1109/CVPR.2015.7298594
– ident: ref43
  doi: 10.1109/ICCV.2017.324
– ident: ref18
  doi: 10.1162/neco.1997.9.8.1735
– ident: ref56
  doi: 10.1007/s10462-019-09784-7
– ident: ref4
  doi: 10.1109/TIV.2022.3223728
– ident: ref8
  doi: 10.1016/j.inffus.2021.02.008
– ident: ref21
  doi: 10.1109/ICCV48922.2021.00986
– ident: ref1
  doi: 10.1109/LRA.2022.3155202
– ident: ref44
  doi: 10.1109/ICCV.2019.00533
– ident: ref53
  doi: 10.1007/978-3-030-58621-8_33
– ident: ref32
  doi: 10.1016/j.inffus.2021.06.008
– ident: ref22
  doi: 10.1109/TCSVT.2021.3069812
– ident: ref25
  doi: 10.1109/TASE.2020.2993143
– ident: ref27
  doi: 10.1109/LGRS.2022.3179721
– year: 2017
  ident: ref15
  article-title: Rethinking Atrous convolution for semantic image segmentation
  publication-title: arXiv:1706.05587
– ident: ref39
  doi: 10.5555/3045118.3045167
– ident: ref29
  doi: 10.1109/CVPRW56347.2022.00341
– ident: ref30
  doi: 10.1007/978-3-319-24574-4_28
– year: 2014
  ident: ref19
  article-title: Empirical evaluation of gated recurrent neural networks on sequence modeling
  publication-title: arXiv:1412.3555
– ident: ref35
  doi: 10.1109/CVPR.2016.90
– ident: ref51
  doi: 10.1007/978-3-319-54181-5_14
– ident: ref48
  doi: 10.1109/CVPR.2018.00199
– ident: ref7
  doi: 10.1016/j.inffus.2018.02.004
– ident: ref49
  doi: 10.1007/978-3-030-01261-8_20
– ident: ref47
  doi: 10.1109/TIP.2023.3256762
– ident: ref34
  doi: 10.1016/j.inffus.2020.05.002
– ident: ref55
  doi: 10.1088/1742-6596/1168/2/022022
– ident: ref37
  doi: 10.1016/j.neunet.2018.05.002
– ident: ref67
  doi: 10.1016/j.patcog.2021.108468
– ident: ref57
  doi: 10.18653/v1/P18-1005
– ident: ref54
  doi: 10.1007/s10462-018-9641-3
– ident: ref58
  doi: 10.1016/j.patrec.2020.01.011
– year: 2015
  ident: ref31
  article-title: Bayesian SegNet: Model uncertainty in deep convolutional encoder–decoder architectures for scene understanding
  publication-title: arXiv:1511.02680
– ident: ref60
  doi: 10.1109/ICIP40778.2020.9191080
– volume: 31
  start-page: 1
  volume-title: Proc. Adv. Neural Inf. Process. Syst.
  ident: ref40
  article-title: How does batch normalization help optimization?
– ident: ref66
  doi: 10.1016/j.geits.2022.100003
– ident: ref11
  doi: 10.1109/TPAMI.2016.2644615
– ident: ref20
  doi: 10.1109/ICCV.2017.606
– ident: ref61
  doi: 10.15588/1607-3274-2018-1-14
– ident: ref5
  doi: 10.1109/TIV.2022.3223131
– ident: ref68
  doi: 10.1007/978-3-031-20056-4_2
– year: 2022
  ident: ref28
  article-title: CMX: Cross-modal fusion for RGB-X semantic segmentation with transformers
  publication-title: arXiv:2203.04838
– ident: ref14
  doi: 10.1109/TPAMI.2017.2699184
– ident: ref45
  doi: 10.1109/TPAMI.2021.3054719
– ident: ref13
  doi: 10.1109/CVPR.2017.660
– ident: ref2
  doi: 10.1109/TSMC.2023.3276218
– ident: ref3
  doi: 10.1109/TPAMI.2021.3059968
– ident: ref42
  doi: 10.1109/TVCG.2016.2598831
– ident: ref9
  doi: 10.1109/CVPR.2015.7298965
– ident: ref23
  doi: 10.1109/IROS.2017.8206396
– ident: ref38
  doi: 10.1007/s10773-019-04177-6
– ident: ref59
  doi: 10.1109/JBHI.2023.3305830
– start-page: 794
  volume-title: Proc. Int. Conf. Mach. Learn.
  ident: ref46
  article-title: GradNorm: Gradient normalization for adaptive loss balancing in deep multitask networks
– ident: ref64
  doi: 10.1016/j.geits.2022.100022
– ident: ref24
  doi: 10.1109/LRA.2019.2904733
– ident: ref52
  doi: 10.1109/ICIP.2019.8803025
– ident: ref62
  doi: 10.1109/TSMC.2023.3283021
– ident: ref65
  doi: 10.1016/j.geits.2022.100026
– ident: ref26
  doi: 10.1109/TIP.2021.3109518
– ident: ref6
  doi: 10.1109/tits.2023.3268063
– year: 2015
  ident: ref17
  article-title: ReNet: A recurrent neural network based alternative to convolutional networks
  publication-title: arXiv:1505.00393
– year: 2014
  ident: ref12
  article-title: Very deep convolutional networks for large-scale image recognition
  publication-title: arXiv:1409.1556
– ident: ref33
  doi: 10.1007/s41095-020-0199-z
– ident: ref50
  doi: 10.1109/WACV.2018.00163
SSID ssj0014511
SourceID proquest
crossref
ieee
SourceType Aggregation Database
Enrichment Source
Index Database
Publisher
StartPage 5122
SubjectTerms Autonomous driving
Autonomous vehicles
Convolutional neural networks
Decoding
Encoders-Decoders
Feature extraction
image fusion
Image segmentation
Parameters
RGB image
Semantic segmentation
Semantics
thermal image
Thermal imaging
Transformers
Title A RGB-Thermal Image Segmentation Method Based on Parameter Sharing and Attention Fusion for Safe Autonomous Driving
URI https://ieeexplore.ieee.org/document/10337777
https://www.proquest.com/docview/3062736797
Volume 25
WOSCitedRecordID wos001121230700001&url=https%3A%2F%2Fcvtisr.summon.serialssolutions.com%2F%23%21%2Fsearch%3Fho%3Df%26include.ft.matches%3Dt%26l%3Dnull%26q%3D
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
journalDatabaseRights – providerCode: PRVIEE
  databaseName: IEEE Electronic Library (IEL)
  customDbUrl:
  eissn: 1558-0016
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0014511
  issn: 1524-9050
  databaseCode: RIE
  dateStart: 20000101
  isFulltext: true
  titleUrlDefault: https://ieeexplore.ieee.org/
  providerName: IEEE
linkProvider IEEE
openUrl ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=A+RGB-Thermal+Image+Segmentation+Method+Based+on+Parameter+Sharing+and+Attention+Fusion+for+Safe+Autonomous+Driving&rft.jtitle=IEEE+transactions+on+intelligent+transportation+systems&rft.au=Li%2C+Guofa&rft.au=Lin%2C+Yongjie&rft.au=Ouyang%2C+Delin&rft.au=Li%2C+Shen&rft.date=2024-06-01&rft.issn=1524-9050&rft.eissn=1558-0016&rft.volume=25&rft.issue=6&rft.spage=5122&rft.epage=5137&rft_id=info:doi/10.1109%2FTITS.2023.3332350&rft.externalDBID=n%2Fa&rft.externalDocID=10_1109_TITS_2023_3332350
thumbnail_l http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/lc.gif&issn=1524-9050&client=summon
thumbnail_m http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/mc.gif&issn=1524-9050&client=summon
thumbnail_s http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/sc.gif&issn=1524-9050&client=summon