Collaborative Content-Dependent Modeling: A Return to the Roots of Salient Object Detection

Published in: IEEE Transactions on Image Processing, Volume 32, pp. 4237-4246
Main Authors: Jiao, Siyu; Goel, Vidit; Navasardyan, Shant; Yang, Zongxin; Khachatryan, Levon; Yang, Yi; Wei, Yunchao; Zhao, Yao; Shi, Humphrey
Format: Journal Article
Language: English
Publication Details: United States: IEEE, 2023
ISSN: 1057-7149 (print); 1941-0042 (electronic)
Abstract: Salient object detection (SOD) aims to identify the most visually distinctive object(s) in a given image. Most recent progress focuses on either adding elaborate connections among different convolution blocks or introducing boundary-aware supervision to achieve better segmentation, which actually moves away from the essence of SOD, i.e., distinctiveness/salience. This paper goes back to the roots of SOD and investigates the principles of identifying distinctive object(s) in a more effective and efficient way. Intuitively, the salience of an object should largely depend on its global context within the input image. Based on this, we devise a clean yet effective architecture for SOD, named Collaborative Content-Dependent Networks (CCD-Net). Specifically, we propose a collaborative content-dependent head whose parameters are conditioned on the input image's global context information. Within the content-dependent head, a hand-crafted multi-scale (HMS) module and a self-induced (SI) module are carefully designed to collaboratively generate content-aware convolution kernels for prediction. Benefiting from the content-dependent head, CCD-Net is capable of leveraging global context to detect distinctive object(s) while keeping a simple encoder-decoder design. Extensive experimental results demonstrate that CCD-Net achieves state-of-the-art results on various benchmarks. Our architecture is simple and intuitive compared with previous solutions, resulting in competitive model complexity, operating efficiency, and segmentation accuracy.
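The content-dependent head lends itself to a compact illustration: a prediction layer whose convolution weights are generated per image from globally pooled features, so the same decoder feature map is classified differently depending on the image's overall content. Below is a minimal PyTorch sketch of this idea. It is an assumption-laden simplification: the class and layer names (ContentDependentHead, kernel_mlp) are hypothetical, and the paper's actual HMS and SI modules produce multi-scale, collaboratively refined kernels rather than the single per-image 1x1 kernel shown here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ContentDependentHead(nn.Module):
    """Hypothetical sketch: a 1x1 prediction conv whose kernel is
    generated from the input feature map's global context."""

    def __init__(self, in_channels: int):
        super().__init__()
        # Maps the pooled global-context vector to the weights and bias
        # of a per-image 1x1 convolution (in_channels weights + 1 bias).
        self.kernel_mlp = nn.Sequential(
            nn.Linear(in_channels, in_channels),
            nn.ReLU(inplace=True),
            nn.Linear(in_channels, in_channels + 1),
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat.shape
        context = feat.mean(dim=(2, 3))             # (B, C) global average pool
        params = self.kernel_mlp(context)           # (B, C + 1) per-image kernel params
        weight = params[:, :c].reshape(b, c, 1, 1)  # one 1x1 kernel per image
        bias = params[:, c]                         # one bias per image
        # A grouped convolution applies each image's own kernel:
        # fold the batch into channels, use groups=B, then unfold.
        out = F.conv2d(feat.reshape(1, b * c, h, w), weight, bias=bias, groups=b)
        return out.reshape(b, 1, h, w)              # saliency logits

# Usage: decoder features (B, 64, H, W) -> per-image saliency logits (B, 1, H, W)
head = ContentDependentHead(64)
logits = head(torch.randn(2, 64, 32, 32))
print(logits.shape)  # torch.Size([2, 1, 32, 32])

The grouped-convolution trick is a standard way to run a different kernel on each batch element in one call, which is what "parameters conditioned on the input image" amounts to at inference time.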
Authors and Affiliations:
1. Jiao, Siyu (ORCID: 0000-0002-0795-8401; jiaosiyu@bjtu.edu.cn), Beijing Key Laboratory of Advanced Information Science and Network and the Institute of Information Science, Beijing Jiaotong University, Beijing, China
2. Goel, Vidit (ORCID: 0000-0001-8483-6363; vidit.goel@picsart.com), Picsart AI Research (PAIR), Atlanta, GA, USA
3. Navasardyan, Shant (shant.navasardyan@picsart.com), PAIR, Yerevan, Armenia
4. Yang, Zongxin (zongxinyang1996@gmail.com), College of Computer Science and Technology, Zhejiang University, Zhejiang, China
5. Khachatryan, Levon (ORCID: 0000-0002-5840-760X; levon.khachatryan@picsart.com), PAIR, Yerevan, Armenia
6. Yang, Yi (ORCID: 0000-0002-0512-880X; yee.i.yang@gmail.com), College of Computer Science and Technology, Zhejiang University, Zhejiang, China
7. Wei, Yunchao (ORCID: 0000-0002-2812-8781; yunchao.wei@bjtu.edu.cn), Beijing Key Laboratory of Advanced Information Science and Network and the Institute of Information Science, Beijing Jiaotong University, Beijing, China
8. Zhao, Yao (ORCID: 0000-0002-8581-9554; yzhao@bjtu.edu.cn), Beijing Key Laboratory of Advanced Information Science and Network and the Institute of Information Science, Beijing Jiaotong University, Beijing, China
9. Shi, Humphrey (ORCID: 0000-0002-2922-5663; humphrey.shi@picsart.com), Picsart AI Research (PAIR), Atlanta, GA, USA
CODEN: IIPRE4
Copyright: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2023
DOI: 10.1109/TIP.2023.3293759
Discipline: Applied Sciences; Engineering
Genre: Original Research (Journal Article)
Funding: Fundamental Research Funds for the Central Universities, Grant K22RC00010 (funder ID: 10.13039/501100012226)
Peer Reviewed: Yes (scholarly journal)
PMID: 37440395
Subject Terms: Coders; Collaboration; content-dependent modeling; Context; Convolution; Decoding; Encoders-Decoders; Feature extraction; Head; Image segmentation; Modules; Object detection; Object recognition; Roots; Salience; Salient object detection; Task analysis; Transformers
Online Access:
https://ieeexplore.ieee.org/document/10183835
https://www.ncbi.nlm.nih.gov/pubmed/37440395
https://www.proquest.com/docview/2843142643
https://www.proquest.com/docview/2838255026