MSCFNet: A Lightweight Network With Multi-Scale Context Fusion for Real-Time Semantic Segmentation
In recent years, how to strike a good trade-off between accuracy, inference speed, and model size has become the core issue for real-time semantic segmentation applications, which plays a vital role in real-world scenarios such as autonomous driving systems and drones. In this study, we devise a novel lightweight network using a multi-scale context fusion (MSCFNet) scheme, which explores an asymmetric encoder-decoder architecture to alleviate these problems.
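The full abstract below describes encoder blocks, named efficient asymmetric residual (EAR) modules, built from factorized depth-wise convolutions and dilated convolutions inside a residual unit. As a reading aid, here is a minimal PyTorch sketch of such a block; the class name `EARBlock`, the kernel sizes, the channel handling, and the placement of normalization and activation are illustrative assumptions drawn only from the abstract, not the authors' published implementation.

```python
import torch
import torch.nn as nn


class EARBlock(nn.Module):
    """Illustrative sketch of an 'efficient asymmetric residual' block.

    The abstract only states that EAR modules combine factorized
    depth-wise convolutions with dilated convolutions; the exact design
    choices below (3x1 + 1x3 factorization, a single dilated depth-wise
    stage, BN after the point-wise projection) are assumptions.
    """

    def __init__(self, channels: int, dilation: int = 2):
        super().__init__()
        # Factorized (asymmetric) depth-wise 3x3: a 3x1 followed by a 1x3.
        self.dw_vertical = nn.Conv2d(channels, channels, kernel_size=(3, 1),
                                     padding=(1, 0), groups=channels, bias=False)
        self.dw_horizontal = nn.Conv2d(channels, channels, kernel_size=(1, 3),
                                       padding=(0, 1), groups=channels, bias=False)
        # Dilated depth-wise stage to enlarge the receptive field.
        self.dw_dilated = nn.Conv2d(channels, channels, kernel_size=3,
                                    padding=dilation, dilation=dilation,
                                    groups=channels, bias=False)
        # Point-wise projection mixes channels after the depth-wise stages.
        self.pw = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.act(self.dw_horizontal(self.dw_vertical(x)))
        out = self.act(self.dw_dilated(out))
        out = self.bn(self.pw(out))
        return self.act(out + x)  # residual connection keeps the block lightweight


if __name__ == "__main__":
    block = EARBlock(channels=64, dilation=2)
    y = block(torch.randn(1, 64, 128, 256))
    print(y.shape)  # torch.Size([1, 64, 128, 256])
```

A block of this form keeps the parameter count low because all spatial filtering is depth-wise and channel mixing is deferred to a single 1x1 convolution, which is consistent with the 1.15M-parameter budget reported in the abstract.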
Saved in:
| Published in: | IEEE Transactions on Intelligent Transportation Systems, Volume 23, Issue 12, pp. 25489 - 25499 |
|---|---|
| Main authors: | Gao, Guangwei; Xu, Guoan; Yu, Yi; Xie, Jin; Yang, Jian; Yue, Dong |
| Format: | Journal Article |
| Language: | English |
| Published: | New York: IEEE, 01.12.2022 (The Institute of Electrical and Electronics Engineers, Inc.) |
| Subjects: | |
| ISSN: | 1524-9050, 1558-0016 |
| Online access: | Get full text |
| Abstract | In recent years, how to strike a good trade-off between accuracy, inference speed, and model size has become the core issue for real-time semantic segmentation applications, which plays a vital role in real-world scenarios such as autonomous driving systems and drones. In this study, we devise a novel lightweight network using a multi-scale context fusion (MSCFNet) scheme, which explores an asymmetric encoder-decoder architecture to alleviate these problems. More specifically, the encoder adopts some developed efficient asymmetric residual (EAR) modules, which are composed of factorization depth-wise convolution and dilation convolution. Meanwhile, instead of complicated computation, simple deconvolution is applied in the decoder to further reduce the amount of parameters while still maintaining the high segmentation accuracy. Also, MSCFNet has branches with efficient attention modules from different stages of the network to well capture multi-scale contextual information. Then we combine them before the final classification to enhance the expression of the features and improve the segmentation efficiency. Comprehensive experiments on challenging datasets have demonstrated that the proposed MSCFNet, which contains only 1.15M parameters, achieves 71.9% Mean IoU on the Cityscapes testing dataset and can run at over 50 FPS on a single Titan XP GPU configuration. |
|---|---|
| Author | Xu, Guoan; Yue, Dong; Yu, Yi; Yang, Jian; Xie, Jin; Gao, Guangwei |
| Author_xml | – sequence: 1 givenname: Guangwei orcidid: 0000-0002-3950-1844 surname: Gao fullname: Gao, Guangwei email: csggao@gmail.com organization: Institute of Advanced Technology, Nanjing University of Posts and Telecommunications, Nanjing, China – sequence: 2 givenname: Guoan surname: Xu fullname: Xu, Guoan email: xga_njupt@163.com organization: College of Automation and College of Artificial Intelligence, Nanjing University of Posts and Telecommunications, Nanjing, China – sequence: 3 givenname: Yi orcidid: 0000-0002-0294-6620 surname: Yu fullname: Yu, Yi email: yiyu@nii.ac.jp organization: Digital Content and Media Sciences Research Division, National Institute of Informatics, Tokyo, Japan – sequence: 4 givenname: Jin surname: Xie fullname: Xie, Jin email: csjxie@njust.edu.cn organization: School of Computer Science and Technology, Nanjing University of Science and Technology, Suzhou, China – sequence: 5 givenname: Jian orcidid: 0000-0003-4800-832X surname: Yang fullname: Yang, Jian email: csjyang@njust.edu.cn organization: School of Computer Science and Technology, Nanjing University of Science and Technology, Suzhou, China – sequence: 6 givenname: Dong orcidid: 0000-0001-7810-9338 surname: Yue fullname: Yue, Dong email: medongy@vip.163.com organization: College of Automation and College of Artificial Intelligence, Institute of Advanced Technology, Nanjing University of Posts and Telecommunications, Nanjing, China |
| CODEN | ITISFG |
| ContentType | Journal Article |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022 |
| Copyright_xml | – notice: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022 |
| DOI | 10.1109/TITS.2021.3098355 |
| Discipline | Engineering |
| EISSN | 1558-0016 |
| EndPage | 25499 |
| ExternalDocumentID | 10_1109_TITS_2021_3098355 9504476 |
| Genre | orig-research |
| GrantInformation_xml | – fundername: Open Fund Project of Provincial Key Laboratory for Computer Information Processing Technology (Soochow University) grantid: KJS1840 – fundername: Natural Science Foundation of Jiangsu Province grantid: BK20190089 funderid: 10.13039/501100004608 – fundername: National Key Research and Development Program of China grantid: 2018AAA0100102; 2018AAA0100100 funderid: 10.13039/501100012166 – fundername: National Natural Science Foundation of China grantid: 61972212; 61772568; 61833011 funderid: 10.13039/501100001809 – fundername: Six Talent Peaks Project, Jiangsu Province grantid: RJFW-011 funderid: 10.13039/501100010014 |
| ISICitedReferencesCount | 102 |
| ISSN | 1524-9050 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 12 |
| Language | English |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
| ORCID | 0000-0001-7810-9338 0000-0002-3950-1844 0000-0003-4800-832X 0000-0002-0294-6620 |
| PQID | 2747612089 |
| PQPubID | 75735 |
| PageCount | 11 |
| PublicationCentury | 2000 |
| PublicationDate | 2022-12-01 |
| PublicationDateYYYYMMDD | 2022-12-01 |
| PublicationDate_xml | – month: 12 year: 2022 text: 2022-12-01 day: 01 |
| PublicationDecade | 2020 |
| PublicationPlace | New York |
| PublicationPlace_xml | – name: New York |
| PublicationTitle | IEEE transactions on intelligent transportation systems |
| PublicationTitleAbbrev | TITS |
| PublicationYear | 2022 |
| Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| Publisher_xml | – name: IEEE – name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| StartPage | 25489 |
| SubjectTerms | Accuracy; Asymmetry; Coders; Context; context fusion; Convolution; Datasets; Ear; Encoders-Decoders; encoder–decoder architecture; Feature extraction; Image segmentation; Lightweight; lightweight network; Modules; Parameters; Real time; Real-time semantic segmentation; Real-time systems; Semantic segmentation; Semantics; Task analysis |
| Title | MSCFNet: A Lightweight Network With Multi-Scale Context Fusion for Real-Time Semantic Segmentation |
| URI | https://ieeexplore.ieee.org/document/9504476 https://www.proquest.com/docview/2747612089 |
| Volume | 23 |