Exploiting Diffusion Prior for Real-World Image Super-Resolution

Published in: International Journal of Computer Vision, Vol. 132, No. 12, pp. 5929-5949
Main authors: Wang, Jianyi; Yue, Zongsheng; Zhou, Shangchen; Chan, Kelvin C. K.; Loy, Chen Change
Medium: Journal Article
Language: English
Published: New York: Springer US, 01.12.2024
ISSN: 0920-5691; EISSN: 1573-1405
Abstract: We present a novel approach to leverage prior knowledge encapsulated in pre-trained text-to-image diffusion models for blind super-resolution. Specifically, by employing our time-aware encoder, we can achieve promising restoration results without altering the pre-trained synthesis model, thereby preserving the generative prior and minimizing training cost. To remedy the loss of fidelity caused by the inherent stochasticity of diffusion models, we employ a controllable feature wrapping module that allows users to balance quality and fidelity by simply adjusting a scalar value during the inference process. Moreover, we develop a progressive aggregation sampling strategy to overcome the fixed-size constraints of pre-trained diffusion models, enabling adaptation to resolutions of any size. A comprehensive evaluation of our method using both synthetic and real-world benchmarks demonstrates its superiority over current state-of-the-art approaches. Code and models are available at https://github.com/IceClear/StableSR.
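The controllable feature wrapping described in the abstract can be pictured as a scalar-weighted interpolation between the decoder's generative features and encoder features that stay faithful to the low-resolution input. The sketch below is a hypothetical, minimal stand-in, not the paper's actual module (which is a learned network); the names `feature_wrap`, `dec_feat`, and `enc_feat`, and the linear blend itself, are assumptions for illustration.

```python
def feature_wrap(dec_feat, enc_feat, w):
    """Blend decoder features toward encoder-conditioned features.

    Hypothetical stand-in for the paper's controllable feature wrapping:
    w = 0.0 keeps the generative (quality-oriented) decoder features,
    w = 1.0 pulls fully toward the LR-faithful encoder features.
    """
    return [d + w * (e - d) for d, e in zip(dec_feat, enc_feat)]

dec = [0.0, 2.0, 4.0]   # decoder features (generative path)
enc = [1.0, 1.0, 1.0]   # encoder features (fidelity path)
print(feature_wrap(dec, enc, 0.0))  # [0.0, 2.0, 4.0]
print(feature_wrap(dec, enc, 0.5))  # [0.5, 1.5, 2.5]
```

The single scalar `w` is the user-facing quality/fidelity knob the abstract refers to: no retraining is needed to move along the trade-off at inference time.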
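The progressive aggregation sampling strategy mentioned in the abstract processes overlapping windows with the fixed-size model and fuses the overlaps with smooth weights so no seams appear. The 1-D toy below is an illustrative sketch under that assumption; the function names and the Gaussian window choice are ours, not the paper's implementation.

```python
import math

def gaussian_weights(size, sigma_frac=0.3):
    # Center-weighted window so contributions fade out toward tile edges.
    c = (size - 1) / 2
    return [math.exp(-((i - c) / (sigma_frac * size)) ** 2) for i in range(size)]

def aggregate(signal, tile=8, stride=4, process=lambda t: t):
    """Run `process` on overlapping tiles and fuse by weighted averaging.

    A 1-D sketch of the idea behind progressive aggregation sampling:
    a fixed-input-size model handles each window, and overlapping
    outputs are averaged with Gaussian weights.
    """
    n = len(signal)
    out = [0.0] * n
    acc = [0.0] * n
    w = gaussian_weights(tile)
    for start in range(0, n - tile + 1, stride):
        patch = process(signal[start:start + tile])
        for i in range(tile):
            out[start + i] += patch[i] * w[i]
            acc[start + i] += w[i]
    return [o / a for o, a in zip(out, acc)]

sig = [float(i) for i in range(16)]
recovered = aggregate(sig)  # identity processing recovers the input
```

With an identity `process`, the weighted average reproduces the input, which is the sanity check that the fusion weights are normalized correctly.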
Author details (all authors: S-Lab, Nanyang Technological University):
1. Wang, Jianyi (ORCID 0000-0001-7025-3626)
2. Yue, Zongsheng
3. Zhou, Shangchen
4. Chan, Kelvin C. K.
5. Loy, Chen Change (email: ccloy@ntu.edu.sg)
Copyright: The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2024. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
DOI: 10.1007/s11263-024-02168-7
Discipline: Applied Sciences; Computer Science
Funding: National Research Foundation Singapore, grant AISG2-PhD-2022-01-033[T] (funder ID: http://dx.doi.org/10.13039/501100001381)
Keywords: Generative prior; Diffusion models; Super-resolution; Image restoration
References Xu, X., Ma, Y., & Sun, W. (2019). Towards real scene super-resolution with raw images. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Ji, X., Cao, Y., Tai, Y., Wang, C., Li, J., & Huang, F. (2020). Real-world super-resolution via kernel estimation and noise injection. In Proceedings of the IEEE/CVF international conference on computer vision workshops (CVPR-W).
Ke, J., Wang, Q., Wang, Y., Milanfar, P., & Yang, F. (2021). Musiq: Multi-scale image quality transformer. In Proceedings of the IEEE/CVF international conference on computer vision (ICCV).
Balaji, Y., Nah, S., Huang, X., Vahdat, A., Song, J., Kreis, K., Aittala, M., Aila, T., Laine, S., Catanzaro, B., Karras, T., & Liu, M. Y. (2022). ediff-i: Text-to-image diffusion models with ensemble of expert denoisers. arXiv preprint arXiv:2211.01324
Liang, J., Zeng, H., & Zhang, L. (2022). Efficient and degradation-adaptive network for real-world image super-resolution. In Proceedings of the European conference on computer vision (ECCV).
Howard, A., Sandler, M., Chu, G., Chen, L. C., Chen, B., Tan, M., Wang, W., Zhu, Y., Pang, R., Vasudevan, V., & Le, Q. V. (2019). Searching for mobilenetv3. In Proceedings of the IEEE/CVF international conference on computer vision (ICCV).
Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980
Sohl-Dickstein, J., Weiss, E., Maheswaranathan, N., & Ganguli, S. (2015). Deep unsupervised learning using nonequilibrium thermodynamics. In Proceedings of international conference on machine learning (ICML).
Wei, Y., Gu, S., Li, Y., Timofte, R., & Jin, L., Song, H. (2021). Unsupervised real-world image super resolution via domain-distance aware training. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Choi, J., Lee, J., Shin, C., Kim, S., Kim, H., & Yoon, S. (2022). Perception prioritized training of diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Agustsson, E., & Timofte, R. (2017). Ntire 2017 challenge on single image super-resolution: Dataset and study. In Proceedings of the IEEE/CVF international conference on computer vision workshops (CVPR-W).
Chung, H., Sim, B., Ryu, D., & Ye, J. C. (2022). Improving diffusion models for inverse problems using manifold constraints. In Proceedings of advances in neural information processing systems (NeurIPS).
Podell, D., English, Z., Lacey, K., Blattmann, A., Dockhorn, T., Müller, J., Penna, J., & Rombach, R. (2023). Sdxl: Improving latent diffusion models for high-resolution image synthesis. In Proceedings of international conference on learning representations (ICLR).
Wang, J., Chan, K. C., & Loy, C. C. (2023). Exploring clip for assessing the look and feel of images. In Proceedings of the AAAI conference on artificial intelligence.
Dai, T., Cai, J., Zhang, Y., Xia, S. T., & Zhang, L. (2019). Second-order attention network for single image super-resolution. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Pan, X., Zhan, X., Dai, B., Lin, D., Loy, C. C., & Luo, P. (2021). Exploiting deep generative prior for versatile image restoration and manipulation. In IEEE transactions on pattern analysis and machine intelligence (TPAMI).
Sauer, A., Lorenz, D., Blattmann, A., & Rombach, R. (2023). Adversarial diffusion distillation. arXiv preprint arXiv:2311.17042
Gu, S., Lugmayr, A., Danelljan, M., Fritsche, M., Lamour, J., & Timofte, R. (2019). Div8k: Diverse 8k resolution image dataset. In Proceedings of the IEEE/CVF international conference on computer vision workshops (ICCV-W).
Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., & Shi, W. (2017). Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., & Chen, M. (2022). Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125
Chollet, F. (2017). Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Deep-floyd. (2023). If. https://github.com/deep-floyd/IF
Timofte, R., Agustsson, E., Van Gool, L., Yang, M. H., & Zhang, L. (2017). Ntire 2017 challenge on single image super-resolution: Methods and results. In Proceedings of the IEEE/CVF international conference on computer vision workshops (CVPR-W).
Wang, X., Yu, K., Dong, C., & Loy, C. C. (2018a). Recovering realistic texture in image super-resolution by deep spatial feature transform. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Hertz, A., Mokady, R., Tenenbaum, J., Aberman, K., Pritch, Y., & Cohen-Or, D. (2022). Prompt-to-prompt image editing with cross attention control. arXiv preprint arXiv:2208.01626
Saharia, C., Ho, J., Chan, W., Salimans, T., Fleet, D. J., & Norouzi, M. (2022b). Image super-resolution via iterative refinement. In IEEE transactions on pattern analysis and machine intelligence (TPAMI).
Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., & Hochreiter, S. (2017). Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Proceedings of advances in neural information processing systems (NeurIPS).
Qi, C., Cun, X., Zhang, Y., Lei, C., Wang, X., Shan, Y., & Chen, Q. (2023). Fatezero: Fusing attentions for zero-shot text-based video editing. arXiv preprint arXiv:2303.09535
Chan, K. C., Wang, X., Xu, X., Gu, J., & Loy, C. C. (2021). GLEAN: Generative latent bank for large-factor image super-resolution. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Nichol, A. Q., Dhariwal, P., Ramesh, A., Shyam, P., Mishkin, P., Mcgrew, B., Sutskever, I., & Chen, M. (2022). Glide: Towards photorealistic image generation and editing with text-guided diffusion models. In Proceedings of international conference on machine learning (ICML).
Sahak, H., Watson, D., Saharia, C., & Fleet, D. (2023). Denoising diffusion probabilistic models for robust image super-resolution in the wild. arXiv preprint arXiv:2302.07864
Feng, W., He, X., Fu, T. J., Jampani, V., Akula, A., Narayana, P., Basu, S., Wang, X. E., & Wang, W. Y. (2023). Training-free structured diffusion guidance for compositional text-to-image synthesis. In Proceedings of international conference on learning representations (ICLR).
Zhang, Y., Li, K., Li, K., Wang, L., Zhong, B., & Fu, Y. (2018b). Image super-resolution using very deep residual channel attention networks. In Proceedings of the European conference on computer vision (ECCV).
Jiang, Y., Chan, K. C., Wang, X., Loy, C. C., & Liu, Z. (2021). Robust reference-based super-resolution via c2-matching. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Yang, S., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Ermon, S., & Poole, B. (2021a). Score-based generative modeling through stochastic differential equations. In Proceedings of international conference on learning representations (ICLR).
Zhang, K., Liang, J., Van Gool, L., & Timofte, R. (2021b). Designing a practical degradation model for deep blind image super-resolution. In Proceedings of the IEEE/CVF international conference on computer vision (ICCV).
He, X., Mo, Z., Wang, P., Liu, Y., Yang, M., & Cheng, J. (2019). Ode-inspired network design for single image super-resolution. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Choi, J., Kim, S., Jeong, Y., Gwon, Y., & Yoon, S. (2021). Ilvr: Conditioning method for denoising diffusion probabilistic models. In Proceedings of the IEEE/CVF international conference on computer vision (ICCV).
LiHYangYChangMChenSFengHXuZLiQChenYSRDiff: Single image super-resolution with diffusion probabilistic modelsNeurocomputing202266610.1016/j.neucom.2022.02.082
Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., Radford, A., Chen, M., & Sutskever, I. (2021). Zero-shot text-to-image generation. In Proceedings of international conference on machine learning (ICML).
Hu, E. J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., & Chen, W. (2022). Lora: Low-rank adaptation of large language models. In Proceedings of international conference on learning representations (ICLR).
Dong, C., Loy, C. C., He, K., & Tang, X. (2014). Learning a deep convolutional network for image super-resolution. In Proceedings of the European conference on computer vision (ECCV).
Dong, C., Loy, C. C., He, K., & Tang, X. (2015). Image super-resolution using deep convolutional networks. In IEEE transactions on pattern analysis and machine intelligence (TPAMI).
Ignatov, A., Kobyshev, N., Timofte, R., Vanhoey, K., & Van Gool, L. (2017). Dslr-quality photos on mobile devices with deep convolutional networks. In Proceedings of the IEEE/CVF international conference on computer vision (ICCV)
Wei, P., Xie, Z., Lu, H., Zhan, Z., Ye, Q., Zuo, W., & Lin, L. (2020). Component divide-and-conquer for real-world image super-resolution. In Proceedings of the European conference on computer vision (ECCV).
Song, J., Meng, C., & Ermon, S. (2020). Denoising diffusion implicit models. In Proceedings of international conference on learning representations (ICLR).
Wu, J. Z., Ge, Y., Wang, X., Lei, S. W., Gu, Y., Hsu, W., Shan, Y., Qie, X., & Shou, M. Z. (2022). Tune-A-Video: One-shot tuning of image diffusion models for text-to-video generation. arXiv preprint arXiv:2212.11565
Avrahami, O., Lischinski, D., & Fried, O. (2022). Blended diffusion for text-driven editing of natural images. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., & Gao, W. (2021). Pre-tra
2168_CR80
2168_CR82
2168_CR81
2168_CR84
2168_CR83
2168_CR86
2168_CR85
H Li (2168_CR42) 2022; 6
2168_CR88
2168_CR87
2168_CR89
2168_CR71
2168_CR70
2168_CR72
2168_CR75
2168_CR74
2168_CR77
2168_CR76
2168_CR79
2168_CR78
2168_CR69
2168_CR60
2168_CR62
2168_CR61
2168_CR64
2168_CR63
2168_CR66
2168_CR65
2168_CR68
2168_CR67
2168_CR59
2168_CR58
2168_CR51
2168_CR50
2168_CR53
2168_CR52
2168_CR55
2168_CR54
2168_CR57
2168_CR56
2168_CR48
2168_CR47
2168_CR49
2168_CR40
2168_CR41
2168_CR44
2168_CR43
2168_CR46
2168_CR45
2168_CR37
2168_CR36
2168_CR39
2168_CR38
2168_CR31
2168_CR30
2168_CR33
2168_CR32
2168_CR35
2168_CR34
2168_CR26
2168_CR25
2168_CR28
2168_CR27
2168_CR29
EL Thorndike (2168_CR73) 1920; 6
2168_CR4
2168_CR3
2168_CR2
2168_CR1
2168_CR8
2168_CR7
2168_CR6
2168_CR20
2168_CR5
2168_CR22
2168_CR21
2168_CR24
2168_CR9
2168_CR23
2168_CR15
2168_CR14
2168_CR17
2168_CR16
2168_CR19
2168_CR100
2168_CR18
2168_CR101
2168_CR102
2168_CR103
2168_CR104
2168_CR105
2168_CR91
2168_CR90
2168_CR93
2168_CR92
2168_CR95
2168_CR94
2168_CR97
2168_CR96
2168_CR11
2168_CR99
2168_CR10
2168_CR98
2168_CR13
2168_CR12
StartPage 5929
SubjectTerms Artificial Intelligence
Computer Imaging
Computer Science
Computer vision
Controllability
Image Processing and Computer Vision
Image resolution
Methods
Pattern Recognition
Pattern Recognition and Graphics
Vision
Title Exploiting Diffusion Prior for Real-World Image Super-Resolution
URI https://link.springer.com/article/10.1007/s11263-024-02168-7
https://www.proquest.com/docview/3128897422
Volume 132
WOSCitedRecordID wos001269395000002