Semi-Identical Twins Variational AutoEncoder for Few-Shot Learning

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, Vol. 35, no. 7, pp. 9455–9469
Main Authors: Zhang, Yi, Huang, Sheng, Peng, Xi, Yang, Dan
Format: Journal Article
Language:English
Published: United States: IEEE, 01.07.2024
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Subjects:
ISSN: 2162-237X, 2162-2388
Abstract Data augmentation is a popular approach to few-shot learning (FSL). It generates additional samples as supplements and then transforms the FSL task into a common supervised learning problem. However, most data-augmentation-based FSL approaches consider only prior visual knowledge for feature generation, which leads to low diversity and poor quality of the generated data. In this study, we address this issue by incorporating both prior visual and prior semantic knowledge to condition the feature generation process. Inspired by the genetic characteristics of semi-identical twins, we develop a novel multimodal generative FSL approach, named the semi-identical twins variational autoencoder (STVAE), that better exploits the complementarity of these modalities by treating multimodal conditional feature generation as a process in which semi-identical twins are born and collaborate to imitate their father. STVAE synthesizes features by pairing two conditional variational autoencoders (CVAEs) that share the same seed but use different modality conditions. The features generated by the two CVAEs are regarded as semi-identical twins and adaptively combined to yield the final feature, which is regarded as their fake father. STVAE requires that the final feature can be converted back into its paired conditions while ensuring these conditions remain consistent with the originals in both representation and function. Moreover, owing to its adaptive linear feature combination strategy, STVAE can still operate when some modalities are absent. STVAE essentially offers a novel, genetics-inspired way to exploit the complementarity of prior information from different modalities in FSL. Extensive experimental results demonstrate that our approach achieves promising performance compared with recent state-of-the-art methods and validate its effectiveness on FSL under various modality settings.
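The abstract sketches the core generation mechanism: two CVAEs share one latent seed but are conditioned on different modalities, and their outputs are adaptively combined into a single synthetic feature, with a fallback when one modality is missing. Below is a minimal, illustrative PyTorch sketch of that idea only; the class names (CondVAE, DualCVAEGenerator), the feature and embedding dimensions, and the scalar gate are hypothetical assumptions, not the authors' implementation, which additionally enforces the condition-reconstruction and consistency constraints described above.

# Minimal sketch of the dual-CVAE feature-generation idea from the abstract.
# All names and dimensions here are illustrative assumptions.
import torch
import torch.nn as nn


class CondVAE(nn.Module):
    """Small conditional VAE: encodes a feature together with a condition,
    and decodes a feature from a latent seed plus that condition."""

    def __init__(self, feat_dim, cond_dim, latent_dim=64, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim + cond_dim, hidden), nn.ReLU(),
        )
        self.to_mu = nn.Linear(hidden, latent_dim)
        self.to_logvar = nn.Linear(hidden, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, feat_dim),
        )

    def encode(self, x, cond):
        h = self.encoder(torch.cat([x, cond], dim=-1))
        return self.to_mu(h), self.to_logvar(h)

    def decode(self, z, cond):
        return self.decoder(torch.cat([z, cond], dim=-1))


class DualCVAEGenerator(nn.Module):
    """Pairs two CVAEs that share the same latent seed but are conditioned on
    different modalities (visual prototype vs. semantic embedding), then
    combines their outputs with a learned scalar gate."""

    def __init__(self, feat_dim=512, vis_dim=512, sem_dim=300, latent_dim=64):
        super().__init__()
        self.visual_cvae = CondVAE(feat_dim, vis_dim, latent_dim)
        self.semantic_cvae = CondVAE(feat_dim, sem_dim, latent_dim)
        self.gate = nn.Parameter(torch.zeros(1))  # sigmoid(gate) weights the two "twins"
        self.latent_dim = latent_dim

    def forward(self, n_samples, vis_cond=None, sem_cond=None):
        z = torch.randn(n_samples, self.latent_dim)  # shared seed for both twins
        if vis_cond is not None and sem_cond is not None:
            f_vis = self.visual_cvae.decode(z, vis_cond.expand(n_samples, -1))
            f_sem = self.semantic_cvae.decode(z, sem_cond.expand(n_samples, -1))
            alpha = torch.sigmoid(self.gate)
            return alpha * f_vis + (1.0 - alpha) * f_sem
        # Partial modality absence: fall back to whichever branch is available.
        if vis_cond is not None:
            return self.visual_cvae.decode(z, vis_cond.expand(n_samples, -1))
        return self.semantic_cvae.decode(z, sem_cond.expand(n_samples, -1))


if __name__ == "__main__":
    gen = DualCVAEGenerator()
    vis_proto = torch.randn(1, 512)   # e.g. class prototype from a few support images
    sem_embed = torch.randn(1, 300)   # e.g. a word/attribute embedding of the class name
    fake = gen(n_samples=8, vis_cond=vis_proto, sem_cond=sem_embed)
    print(fake.shape)  # torch.Size([8, 512])

The scalar sigmoid gate stands in for the adaptive linear combination described in the abstract; because either branch can be decoded alone from the shared seed, the generator degrades gracefully when the visual or semantic condition is absent.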
Author Huang, Sheng
Zhang, Yi
Peng, Xi
Yang, Dan
Author_xml – sequence: 1
  givenname: Yi
  orcidid: 0000-0003-0843-7170
  surname: Zhang
  fullname: Zhang, Yi
  email: zhangyii@cqu.edu.cn
  organization: School of Big Data and Software Engineering, Chongqing University, Chongqing, China
– sequence: 2
  givenname: Sheng
  orcidid: 0000-0001-5610-0826
  surname: Huang
  fullname: Huang, Sheng
  email: huangsheng@cqu.edu.cn
  organization: School of Big Data and Software Engineering and the Ministry of Education Key Laboratory of Dependable Service Computing in Cyber Physical Society, Chongqing University, Chongqing, China
– sequence: 3
  givenname: Xi
  orcidid: 0000-0002-7772-001X
  surname: Peng
  fullname: Peng, Xi
  email: xipeng@udel.edu
  organization: Department of Computer and Information Sciences, University of Delaware, Newark, DE, USA
– sequence: 4
  givenname: Dan
  orcidid: 0000-0001-5640-7772
  surname: Yang
  fullname: Yang, Dan
  email: dyang@cqu.edu.cn
  organization: School of Big Data and Software Engineering, Chongqing University, Chongqing, China
CODEN ITNNAL
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2024
DOI 10.1109/TNNLS.2022.3233553
Discipline Computer Science
EISSN 2162-2388
EndPage 9469
ExternalDocumentID 37018571
10_1109_TNNLS_2022_3233553
10012316
Genre orig-research
Journal Article
GrantInformation_xml – fundername: National Natural Science Foundation of China
  grantid: 62176030
  funderid: 10.13039/501100001809
– fundername: Natural Science Foundation of Chongqing
  grantid: cstc2021jcyj-msxmX0568
  funderid: 10.13039/501100005230
ISSN 2162-237X
2162-2388
IsPeerReviewed false
IsScholarly true
Issue 7
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
ORCID 0000-0003-0843-7170
0000-0001-5640-7772
0000-0002-7772-001X
0000-0001-5610-0826
PMID 37018571
PQID 3078095980
PQPubID 85436
PageCount 15
PublicationCentury 2000
PublicationDate 2024-07-01
PublicationDateYYYYMMDD 2024-07-01
PublicationDecade 2020
PublicationPlace United States
PublicationPlace_xml – name: United States
– name: Piscataway
PublicationTitle IEEE Transactions on Neural Networks and Learning Systems
PublicationTitleAbbrev TNNLS
PublicationTitleAlternate IEEE Trans Neural Netw Learn Syst
PublicationYear 2024
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
StartPage 9455
SubjectTerms Adaptation models
Complementarity
Conditional variational auto-encoder (CVAE)
Data augmentation
feature generation
few-shot learning (FSL)
Genetics
Information processing
Learning
Machine learning
modality absence
Semantics
Supervised learning
Task analysis
Training
Twins
Visualization
Title Semi-Identical Twins Variational AutoEncoder for Few-Shot Learning
URI https://ieeexplore.ieee.org/document/10012316
https://www.ncbi.nlm.nih.gov/pubmed/37018571
https://www.proquest.com/docview/3078095980
https://www.proquest.com/docview/2797147097
Volume 35