FADE: A Task-Agnostic Upsampling Operator for Encoder–Decoder Architectures

Detailed bibliography

Published in: International Journal of Computer Vision, Vol. 133, No. 1, pp. 151–172
Main authors: Lu, Hao; Liu, Wenze; Fu, Hongtao; Cao, Zhiguo
Medium: Journal Article
Language: English
Published: New York: Springer US, 1 January 2025 (Springer Nature B.V.)
ISSN: 0920-5691 (print); 1573-1405 (electronic)
DOI: 10.1007/s11263-024-02191-8
Online access: https://link.springer.com/article/10.1007/s11263-024-02191-8
Abstract
The goal of this work is to develop a task-agnostic feature upsampling operator for dense prediction, where the operator must facilitate not only region-sensitive tasks like semantic segmentation but also detail-sensitive tasks such as image matting. Prior upsampling operators often work well on one type of task, but not both. We argue that task-agnostic upsampling should dynamically trade off between semantic preservation and detail delineation, instead of being biased toward one of the two properties. In this paper, we present FADE, a novel, plug-and-play, lightweight, and task-agnostic upsampling operator that fuses the assets of encoder and decoder features at three levels: (i) considering both the encoder and decoder features in upsampling kernel generation; (ii) controlling the per-point contribution of the encoder/decoder features in the upsampling kernels with an efficient semi-shift convolutional operator; and (iii) enabling the selective pass of encoder features with a decoder-dependent gating mechanism that compensates for details. To improve the practicality of FADE, we additionally study parameter- and memory-efficient implementations of semi-shift convolution. We analyze the upsampling behavior of FADE on toy data and show through large-scale experiments that FADE is task-agnostic, with consistent performance improvements on a number of dense prediction tasks at little extra cost. For the first time, we demonstrate robust feature upsampling on both region- and detail-sensitive tasks. Code is made available at: https://github.com/poppinace/fade
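The three mechanisms listed in the abstract can be illustrated with a short, self-contained PyTorch sketch. Everything below is assumption-laden: the class name FADELikeUpsampler, the layer shapes, and the gating form are illustrative choices, and the paper's efficient semi-shift convolution is approximated by an ordinary convolution over the concatenated (naively upsampled decoder, encoder) features. This is a conceptual sketch of the idea, not the authors' implementation (see the linked repository for that).

# Minimal, illustrative sketch of a FADE-like 2x upsampler.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FADELikeUpsampler(nn.Module):
    """Conceptual 2x upsampler combining the abstract's three ingredients."""
    def __init__(self, channels: int, kernel_size: int = 5):
        super().__init__()
        self.k = kernel_size
        # (i) upsampling kernels conditioned on BOTH encoder and decoder
        # features; a plain conv here stands in for semi-shift convolution.
        self.kernel_gen = nn.Conv2d(2 * channels, kernel_size ** 2,
                                    kernel_size=3, padding=1)
        # (iii) decoder-dependent gate for selectively passing encoder detail.
        self.gate = nn.Sequential(nn.Conv2d(channels, 1, 3, padding=1),
                                  nn.Sigmoid())

    def forward(self, decoder_feat: torch.Tensor,
                encoder_feat: torch.Tensor) -> torch.Tensor:
        # decoder_feat: (B, C, H, W); encoder_feat: (B, C, 2H, 2W) skip feature.
        up = F.interpolate(decoder_feat, scale_factor=2, mode="nearest")
        b, c, h, w = up.shape
        # (ii) per-point content-aware kernels from both feature sources.
        kernels = F.softmax(self.kernel_gen(torch.cat([up, encoder_feat], 1)),
                            dim=1)                          # (B, k*k, 2H, 2W)
        # Kernel reassembly: each output point is a learned weighted average
        # over a k x k neighborhood of the upsampled decoder feature.
        patches = F.unfold(up, self.k, padding=self.k // 2)  # (B, C*k*k, 4HW)
        patches = patches.view(b, c, self.k ** 2, h, w)
        refined = (patches * kernels.unsqueeze(1)).sum(dim=2)  # (B, C, 2H, 2W)
        # Gated fusion: the gate decides, per point, how much encoder detail
        # to pass through versus how much upsampled semantics to keep.
        g = self.gate(up)
        return g * encoder_feat + (1.0 - g) * refined

# Quick shape check: fuse a coarse decoder map with its 2x-resolution skip.
dec = torch.randn(1, 64, 16, 16)
enc = torch.randn(1, 64, 32, 32)
print(FADELikeUpsampler(64)(dec, enc).shape)  # torch.Size([1, 64, 32, 32])

The design point the sketch makes concrete is the trade-off described in the abstract: the per-point gate g lets the operator behave like a detail-preserving skip connection where encoder detail matters (g near 1) and like a semantics-preserving learned upsampler elsewhere (g near 0).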
Authors
1. Lu, Hao (ORCID: 0000-0003-3854-8664)
2. Liu, Wenze (ORCID: 0000-0002-1510-6196)
3. Fu, Hongtao (ORCID: 0000-0002-6692-0913)
4. Cao, Zhiguo (ORCID: 0000-0002-9223-1863; email: zgcao@hust.edu.cn)
All authors: The Key Laboratory of Image Processing and Intelligent Control, Ministry of Education, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology
Copyright: The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2024. Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version is governed by the terms of that agreement and applicable law.
Funding: National Natural Science Foundation of China (grant 62106080)
Keywords: Feature upsampling; Dense prediction; Semantic segmentation; Instance segmentation; Object detection; Image matting; Depth estimation
Subject terms: Artificial Intelligence; Computer Imaging; Computer Science; Encoders-Decoders; Image Processing and Computer Vision; Image segmentation; Parameter sensitivity; Pattern Recognition; Pattern Recognition and Graphics; Semantic segmentation; Semantics; Vision