Rethinking cross-domain semantic relation for few-shot image generation

Detailed bibliography
Published in: Applied Intelligence (Dordrecht, Netherlands), Volume 53, Issue 19, pp. 22391-22404
Main authors: Gou, Yao; Li, Min; Lv, Yilong; Zhang, Yusen; Xing, Yuhang; He, Yujie
Format: Journal Article
Language: English
Publication details: New York: Springer US, 01.10.2023 (Springer Nature B.V.)
ISSN: 0924-669X; EISSN: 1573-7497
Abstract: Training well-performing Generative Adversarial Networks (GANs) with limited data has always been challenging. Existing methods either require sufficient data (over 100 training images) or generate images of low quality and low diversity. To solve this problem, we propose a new Cross-domain Semantic Relation (CSR) loss. The CSR loss improves the performance of the generative model by maintaining the relationships between instances in the source domain and the generated images. At the same time, a perceptual similarity loss and a discriminative contrastive loss are designed to further enrich the diversity of the generated images and stabilize the training process. Experiments on nine publicly available few-shot datasets, and comparisons with nine current methods, show that our approach is superior to all baseline methods. Finally, we perform ablation studies on the three proposed loss functions and show that all three are essential for few-shot image generation. Code is available at https://github.com/gouayao/CSR.
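
Note: the abstract describes the CSR loss only at a high level (preserving pairwise instance relations between source-domain features and generated images). The Python sketch below is an assumption-based illustration of one plausible relation-preserving term of this kind, not the authors' implementation (their code is at the GitHub link above); the function names, feature inputs, and temperature value are hypothetical.

# Illustrative sketch only: one plausible cross-domain semantic-relation term,
# matching pairwise similarity distributions of source and generated features.
# Not the authors' code (see https://github.com/gouayao/CSR); names are hypothetical.
import torch
import torch.nn.functional as F

def pairwise_relation(features: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Softmax distribution over each instance's cosine similarity to the others.

    features: (N, D) batch of feature vectors; returns an (N, N-1) matrix
    (the diagonal self-similarity is removed before the softmax).
    """
    feats = F.normalize(features, dim=1)
    sim = feats @ feats.t() / tau
    off_diag = ~torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    sim = sim[off_diag].view(sim.size(0), -1)
    return F.softmax(sim, dim=1)

def csr_style_loss(feat_source: torch.Tensor, feat_generated: torch.Tensor) -> torch.Tensor:
    """KL divergence pulling the generated-image relations toward the source relations.

    Both inputs are (N, D) features of the same N latent codes, taken from the
    source-domain model and the adapted (few-shot) model respectively.
    """
    p_src = pairwise_relation(feat_source).detach()   # source relations serve as the target
    p_gen = pairwise_relation(feat_generated)
    return F.kl_div(p_gen.log(), p_src, reduction="batchmean")

if __name__ == "__main__":
    torch.manual_seed(0)
    src = torch.randn(8, 512)
    gen = src + 0.05 * torch.randn(8, 512)            # stand-in for adapted-model features
    print(float(csr_style_loss(src, gen)))            # near zero when relations are preserved

In a full training loop such a term would be added to the adversarial objective alongside the perceptual-similarity and discriminative contrastive terms the abstract mentions; the exact formulation used by the authors is given in the paper and repository.
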
Authors:
– Gou, Yao (ORCID 0000-0003-0105-3377), Xi’an High-Tech Research Institute
– Li, Min, Xi’an High-Tech Research Institute
– Lv, Yilong, Xi’an High-Tech Research Institute
– Zhang, Yusen, Xi’an High-Tech Research Institute
– Xing, Yuhang, Xi’an High-Tech Research Institute
– He, Yujie (ORCID 0000-0002-2299-4945, email: ksy5201314@163.com), Xi’an High-Tech Research Institute
Copyright: The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
DOI: 10.1007/s10489-023-04602-8
Funding: National Natural Science Foundation of China, grant 62006240 (funder ID: http://dx.doi.org/10.13039/501100001809)
Keywords: Cross-domain semantic relation; Generative adversarial networks; Few-shot image generation; Perceptual similarity; Contrastive learning
Journal subtitle: The International Journal of Research on Intelligent Systems for Real Life Complex Problems
Subjects: Ablation; Artificial Intelligence; Computer Science; Generative adversarial networks; Image processing; Image quality; Machines; Manufacturing; Mechanical Engineering; Processes; Semantic relations; Semantics; Training