Unified DeepLabV3+ for Semi-Dark Image Semantic Segmentation
Saved in:
| Published in: | Sensors (Basel, Switzerland), Volume 22, Issue 14, p. 5312 |
|---|---|
| Main authors: | Memon, Mehak Maqbool; Hashmani, Manzoor Ahmed; Junejo, Aisha Zahid; Rizvi, Syed Sajjad; Raza, Kamran |
| Format: | Journal Article |
| Language: | English |
| Publication details: | Basel: MDPI AG, 15 July 2022 |
| Subjects: | Architecture; atrous convolutions; Automation; Computer vision; high-resolution images; Neural networks; semantic segmentation; Semantics; super-pixels; urban environments |
| ISSN: | 1424-8220 |
| Online access: | Get full text |
| Abstract | Semantic segmentation for accurate visual perception is a critical task in computer vision. In principle, the automatic classification of dynamic visual scenes using predefined object classes remains unresolved. The challenging problems of learning deep convolutional neural networks, specifically ResNet-based DeepLabV3+ (the most recent version), are threefold. The problems arise due to (1) biased centric exploitations of filter masks, (2) lower representational power of residual networks due to identity shortcuts, and (3) a loss of spatial relationship by using per-pixel primitives. To solve these problems, we present a proficient approach based on DeepLabV3+, along with an added evaluation metric, namely, Unified DeepLabV3+ and S3core, respectively. The presented unified version reduced the effect of biased exploitations via additional dilated convolution layers with customized dilation rates. We further tackled the problem of representational power by introducing non-linear group normalization shortcuts to solve the focused problem of semi-dark images. Meanwhile, to keep track of the spatial relationships in terms of the global and local contexts, geometrically bunched pixel cues were used. We accumulated all the proposed variants of DeepLabV3+ to propose Unified DeepLabV3+ for accurate visual decisions. Finally, the proposed S3core evaluation metric was based on the weighted combination of three different accuracy measures, i.e., the pixel accuracy, IoU (intersection over union), and Mean BFScore, as robust identification criteria. Extensive experimental analysis performed over the CamVid dataset confirmed the applicability of the proposed solution for autonomous vehicles and robotics for outdoor settings. The experimental analysis showed that the proposed Unified DeepLabV3+ outperformed DeepLabV3+ by a margin of 3% in terms of the class-wise pixel accuracy, along with a higher S3core, depicting the effectiveness of the proposed approach. |
|---|---|
| Author | Rizvi, Syed Sajjad; Junejo, Aisha Zahid; Memon, Mehak Maqbool; Hashmani, Manzoor Ahmed; Raza, Kamran |
| AuthorAffiliation | 1 High Performance Cloud Computing Center (HPC3), Department of Computer and Information Sciences, Universiti Teknologi PETRONAS, Seri Iskandar 32610, Malaysia; mehak_19001057@utp.edu.my (M.M.M.); aisha_19001022@utp.edu.my (A.Z.J.) – 2 Department of Computer Science, Shaheed Zulfiqar Ali Bhutto Institute of Science and Technology, Karachi 75600, Pakistan; sshussainr@gmail.com – 3 Faculty of Engineering Science and Technology, Iqra University, Karachi 75600, Pakistan; kraza@iqra.edu.pk |
| Author_xml | 1. Memon, Mehak Maqbool (ORCID 0000-0003-3921-0104); 2. Hashmani, Manzoor Ahmed (ORCID 0000-0002-6617-8149); 3. Junejo, Aisha Zahid (ORCID 0000-0001-9815-2704); 4. Rizvi, Syed Sajjad; 5. Raza, Kamran |
| CitedBy_id | 10.1016/j.bspc.2024.106691; 10.3390/s23198338; 10.1111/cas.16394; 10.1016/j.compag.2023.107875 |
| ContentType | Journal Article |
| Copyright | 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |
| DOI | 10.3390/s22145312 |
| Discipline | Engineering; Architecture |
| EISSN | 1424-8220 |
| ExternalDocumentID | oai_doaj_org_article_faaf52ec5d9942e48e060ccf3c0a2e41 PMC9324997 10_3390_s22145312 |
| GrantInformation | Iqra University, Pakistan; Universiti Teknologi PETRONAS (UTP), Malaysia (grant 015MEO-227) |
| ISICitedReferencesCount | 6 |
| ISSN | 1424-8220 |
| IsDoiOpenAccess | true |
| IsOpenAccess | true |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 14 |
| Language | English |
| License | Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
| ORCID | 0000-0003-3921-0104 0000-0002-6617-8149 0000-0001-9815-2704 |
| OpenAccessLink | https://doaj.org/article/faaf52ec5d9942e48e060ccf3c0a2e41 |
| PMID | 35890992 |
| PQID | 2694063496 |
| PQPubID | 2032333 |
| PublicationCentury | 2000 |
| PublicationDate | 20220715 |
| PublicationDateYYYYMMDD | 2022-07-15 |
| PublicationDecade | 2020 |
| PublicationPlace | Basel |
| PublicationTitle | Sensors (Basel, Switzerland) |
| PublicationYear | 2022 |
| Publisher | MDPI AG |
| StartPage | 5312 |
| SubjectTerms | Architecture; atrous convolutions; Automation; Computer vision; high-resolution images; Neural networks; semantic segmentation; Semantics; super-pixels; urban environments |
| Title | Unified DeepLabV3+ for Semi-Dark Image Semantic Segmentation |
| URI | https://www.proquest.com/docview/2694063496 https://www.proquest.com/docview/2695291453 https://pubmed.ncbi.nlm.nih.gov/PMC9324997 https://doaj.org/article/faaf52ec5d9942e48e060ccf3c0a2e41 |
| Volume | 22 |
| WOSCitedRecordID | WOS:000831895200001 |