Unified DeepLabV3+ for Semi-Dark Image Semantic Segmentation


Published in: Sensors (Basel, Switzerland), Volume 22, Issue 14, p. 5312
Main authors: Memon, Mehak Maqbool; Hashmani, Manzoor Ahmed; Junejo, Aisha Zahid; Rizvi, Syed Sajjad; Raza, Kamran
Medium: Journal Article
Language: English
Published: Basel, MDPI AG, 15 July 2022
ISSN: 1424-8220
Abstract Semantic segmentation for accurate visual perception is a critical task in computer vision. In principle, the automatic classification of dynamic visual scenes using predefined object classes remains unresolved. The challenging problems of learning deep convolution neural networks, specifically ResNet-based DeepLabV3+ (the most recent version), are threefold. The problems arise due to (1) biased centric exploitations of filter masks, (2) lower representational power of residual networks due to identity shortcuts, and (3) a loss of spatial relationship by using per-pixel primitives. To solve these problems, we present a proficient approach based on DeepLabV3+, along with an added evaluation metric, namely, Unified DeepLabV3+ and S3core, respectively. The presented unified version reduced the effect of biased exploitations via additional dilated convolution layers with customized dilation rates. We further tackled the problem of representational power by introducing non-linear group normalization shortcuts to solve the focused problem of semi-dark images. Meanwhile, to keep track of the spatial relationships in terms of the global and local contexts, geometrically bunched pixel cues were used. We accumulated all the proposed variants of DeepLabV3+ to propose Unified DeepLabV3+ for accurate visual decisions. Finally, the proposed S3core evaluation metric was based on the weighted combination of three different accuracy measures, i.e., the pixel accuracy, IoU (intersection over union), and Mean BFScore, as robust identification criteria. Extensive experimental analysis performed over a CamVid dataset confirmed the applicability of the proposed solution for autonomous vehicles and robotics for outdoor settings. The experimental analysis showed that the proposed Unified DeepLabV3+ outperformed DeepLabV3+ by a margin of 3% in terms of the class-wise pixel accuracy, along with a higher S3core, depicting the effectiveness of the proposed approach.
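The abstract defines S3core as a weighted combination of three accuracy measures (pixel accuracy, IoU, and Mean BFScore) and describes extra dilated convolution layers with customized dilation rates. The sketch below illustrates both ideas; the equal weights, the confusion-matrix formulation, and all function names are illustrative assumptions, not taken from the paper, whose exact weighting scheme is not given in the abstract.

```python
import numpy as np

def pixel_accuracy(conf):
    """Fraction of correctly labeled pixels, where conf[i, j] counts
    pixels of true class i predicted as class j."""
    return float(np.trace(conf) / conf.sum())

def mean_iou(conf):
    """Mean intersection-over-union across classes."""
    tp = np.diag(conf).astype(float)
    union = conf.sum(axis=0) + conf.sum(axis=1) - tp
    return float(np.mean(tp / np.maximum(union, 1)))

def s3core(pixel_acc, iou, bfscore, weights=(1 / 3, 1 / 3, 1 / 3)):
    """Weighted combination of the three accuracy measures.
    Equal weights are an assumption; the paper's weights are not
    stated in the abstract."""
    w = np.asarray(weights, dtype=float)
    if not np.isclose(w.sum(), 1.0):
        raise ValueError("weights should form a convex combination")
    return float(w @ np.array([pixel_acc, iou, bfscore], dtype=float))

def effective_kernel_size(k, rate):
    """Effective receptive field of a k x k atrous (dilated) kernel:
    e.g. a 3 x 3 kernel at dilation rate 2 covers a 5 x 5 window."""
    return k + (k - 1) * (rate - 1)
```

For example, with equal weights, `s3core(0.9, 0.6, 0.75)` returns 0.75, and `effective_kernel_size(3, 2)` returns 5, showing how larger dilation rates widen context without adding parameters.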
Author Rizvi, Syed Sajjad
Junejo, Aisha Zahid
Memon, Mehak Maqbool
Hashmani, Manzoor Ahmed
Raza, Kamran
AuthorAffiliation 1 High Performance Cloud Computing Center (HPC3), Department of Computer and Information Sciences, Universiti Teknologi PETRONAS, Seri Iskandar 32610, Malaysia; mehak_19001057@utp.edu.my (M.M.M.); aisha_19001022@utp.edu.my (A.Z.J.)
2 Department of Computer Science, Shaheed Zulfiqar Ali Bhutto Institute of Science and Technology, Karachi 75600, Pakistan; sshussainr@gmail.com
3 Faculty of Engineering Science and Technology, Iqra University, Karachi 75600, Pakistan; kraza@iqra.edu.pk
Author_xml – sequence: 1
  givenname: Mehak Maqbool
  orcidid: 0000-0003-3921-0104
  surname: Memon
  fullname: Memon, Mehak Maqbool
– sequence: 2
  givenname: Manzoor Ahmed
  orcidid: 0000-0002-6617-8149
  surname: Hashmani
  fullname: Hashmani, Manzoor Ahmed
– sequence: 3
  givenname: Aisha Zahid
  orcidid: 0000-0001-9815-2704
  surname: Junejo
  fullname: Junejo, Aisha Zahid
– sequence: 4
  givenname: Syed Sajjad
  surname: Rizvi
  fullname: Rizvi, Syed Sajjad
– sequence: 5
  givenname: Kamran
  surname: Raza
  fullname: Raza, Kamran
ContentType Journal Article
Copyright 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
DOI 10.3390/s22145312
Discipline Engineering
Architecture
EISSN 1424-8220
ExternalDocumentID PMC9324997
GrantInformation_xml – fundername: Iqra University, Pakistan
– fundername: Universiti Teknologi PETRONAS (UTP), Malaysia
  grantid: 015MEO-227
ISSN 1424-8220
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 14
Language English
License Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
ORCID 0000-0003-3921-0104
0000-0002-6617-8149
0000-0001-9815-2704
PMID 35890992
PublicationDate 20220715
PublicationPlace Basel
PublicationTitle Sensors (Basel, Switzerland)
PublicationYear 2022
Publisher MDPI AG
StartPage 5312
SubjectTerms Architecture
atrous convolutions
Automation
Computer vision
high-resolution images
Neural networks
semantic segmentation
Semantics
super-pixels
urban environments
Title Unified DeepLabV3+ for Semi-Dark Image Semantic Segmentation
URI https://www.proquest.com/docview/2694063496
https://www.proquest.com/docview/2695291453
https://pubmed.ncbi.nlm.nih.gov/PMC9324997
https://doaj.org/article/faaf52ec5d9942e48e060ccf3c0a2e41
Volume 22