Context-Aware Learning for Generative Models

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, Vol. 32, no. 8, pp. 3471-3483
Main Authors: Perdikis, Serafeim, Leeb, Robert, Chavarriaga, Ricardo, Millan, Jose del R.
Format: Journal Article
Language:English
Published: Piscataway: IEEE, 01.08.2021
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Subjects:
ISSN: 2162-237X, 2162-2388
Online Access: Get full text
Abstract This work studies the class of algorithms for learning with side-information that emerges by extending generative models with embedded context-related variables. Using finite mixture models (FMMs) as the prototypical Bayesian network, we show that maximum-likelihood estimation (MLE) of parameters through expectation-maximization (EM) improves over the regular unsupervised case and can approach the performances of supervised learning, despite the absence of any explicit ground-truth data labeling. By direct application of the missing information principle (MIP), the algorithms' performances are proven to range between the conventional supervised and unsupervised MLE extremities proportionally to the information content of the contextual assistance provided. The acquired benefits regard higher estimation precision, smaller standard errors, faster convergence rates, and improved classification accuracy or regression fitness shown in various scenarios while also highlighting important properties and differences among the outlined situations. Applicability is showcased with three real-world unsupervised classification scenarios employing Gaussian mixture models. Importantly, we exemplify the natural extension of this methodology to any type of generative model by deriving an equivalent context-aware algorithm for variational autoencoders (VAs), thus broadening the spectrum of applicability to unsupervised deep learning with artificial neural networks. The latter is contrasted with a neural-symbolic algorithm exploiting side information.
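The abstract's core mechanism, letting side information act as a per-sample prior over the latent class so that EM interpolates between the unsupervised and supervised maximum-likelihood extremes, can be illustrated with a small sketch. The code below is not the authors' implementation: it assumes the context simply rescales the E-step responsibilities of a one-dimensional, two-component Gaussian mixture through a per-sample prior p(z_n = k | c_n), and the function name context_aware_em is hypothetical.

import numpy as np

def context_aware_em(x, context_prior, n_iter=100):
    """x: (N,) data; context_prior: (N, K) per-sample prior over latent components."""
    N, K = context_prior.shape
    mu = np.quantile(x, np.linspace(0.25, 0.75, K))   # crude initialisation
    var = np.full(K, np.var(x))
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: component responsibilities, weighted per sample by the contextual prior
        log_lik = -0.5 * ((x[:, None] - mu) ** 2 / var + np.log(2 * np.pi * var))
        log_post = np.log(pi) + np.log(context_prior + 1e-12) + log_lik
        log_post -= log_post.max(axis=1, keepdims=True)
        resp = np.exp(log_post)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: standard weighted maximum-likelihood updates of the mixture parameters
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-12
        pi = nk / N
    return mu, var, pi, resp

# Toy usage: side information given as noisy labels that are correct 80% of the time
rng = np.random.default_rng(0)
z = rng.integers(0, 2, size=500)
x = rng.normal(np.where(z == 0, -1.0, 2.0), 1.0)
noisy = np.where(rng.random(500) < 0.8, z, 1 - z)
context_prior = np.where(np.stack([noisy == 0, noisy == 1], axis=1), 0.8, 0.2)
mu, var, pi, _ = context_aware_em(x, context_prior)

With uniform rows in context_prior the updates reduce to ordinary unsupervised EM; with (near) one-hot rows they approach the supervised estimate, mirroring the supervised-to-unsupervised spectrum that the abstract derives from the missing information principle.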
Author Leeb, Robert
Millan, Jose del R.
Chavarriaga, Ricardo
Perdikis, Serafeim
Author_xml – sequence: 1
  givenname: Serafeim
  orcidid: 0000-0003-2033-2486
  surname: Perdikis
  fullname: Perdikis, Serafeim
  email: serafeim.perdikis@essex.ac.uk
  organization: Center for Neuroprosthetics, Chair in Brain-Machine Interface, School of Engineering, Institute of Bioengineering, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
– sequence: 2
  givenname: Robert
  surname: Leeb
  fullname: Leeb, Robert
  organization: Center for Neuroprosthetics, Chair in Brain-Machine Interface, School of Engineering, Institute of Bioengineering, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
– sequence: 3
  givenname: Ricardo
  orcidid: 0000-0002-8879-2860
  surname: Chavarriaga
  fullname: Chavarriaga, Ricardo
  organization: Center for Neuroprosthetics, Chair in Brain-Machine Interface, School of Engineering, Institute of Bioengineering, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
– sequence: 4
  givenname: Jose del R.
  orcidid: 0000-0001-5819-1522
  surname: Millan
  fullname: Millan, Jose del R.
  organization: Center for Neuroprosthetics, Chair in Brain-Machine Interface, School of Engineering, Institute of Bioengineering, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
CODEN ITNNAL
CitedBy_id crossref_primary_10_1038_s42003_021_02938_w
crossref_primary_10_1093_pnasnexus_pgae076
crossref_primary_10_1109_TNSRE_2024_3456591
crossref_primary_10_1080_2326263X_2021_2009654
Cites_doi 10.1016/0024-3795(94)90363-8
10.1109/TNN.1998.712192
10.1145/1390334.1390436
10.1007/978-3-642-23808-6_36
10.1145/279943.279962
10.1145/1553374.1553457
10.1016/j.patcog.2009.03.027
10.1007/978-1-4471-0211-3
10.1109/CVPR.2013.52
10.1111/j.2517-6161.1977.tb01600.x
10.1109/ACCESS.2020.2977671
10.1137/1.9781611972818.52
10.1016/j.patcog.2008.07.014
10.1145/354756.354805
10.1109/IJCNN.2009.5178922
10.1109/TPAMI.2012.139
10.1089/cmb.2010.0034
10.3115/v1/P14-1031
10.1088/1741-2560/13/3/036018
10.1016/S0024-3795(99)00013-0
10.1109/ICCV.2019.00654
10.1088/1741-2560/11/3/036003
10.1145/1273496.1273571
10.7551/mitpress/9780262033589.001.0001
10.1109/ICDM.2011.84
10.1111/j.0006-341X.2004.00156.x
10.1007/11551188_45
10.18653/v1/P16-1228
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2021
Copyright_xml – notice: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2021
DBID 97E
RIA
RIE
AAYXX
CITATION
7QF
7QO
7QP
7QQ
7QR
7SC
7SE
7SP
7SR
7TA
7TB
7TK
7U5
8BQ
8FD
F28
FR3
H8D
JG9
JQ2
KR7
L7M
L~C
L~D
P64
7X8
DOI 10.1109/TNNLS.2020.3011671
DatabaseName IEEE Xplore (IEEE)
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Xplore Digital Library (IEL)
CrossRef
Aluminium Industry Abstracts
Biotechnology Research Abstracts
Calcium & Calcified Tissue Abstracts
Ceramic Abstracts
Chemoreception Abstracts
Computer and Information Systems Abstracts
Corrosion Abstracts
Electronics & Communications Abstracts
Engineered Materials Abstracts
Materials Business File
Mechanical & Transportation Engineering Abstracts
Neurosciences Abstracts
Solid State and Superconductivity Abstracts
METADEX
Technology Research Database
ANTE: Abstracts in New Technology & Engineering
Engineering Research Database
Aerospace Database
Materials Research Database
ProQuest Computer Science Collection
Civil Engineering Abstracts
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
Biotechnology and BioEngineering Abstracts
MEDLINE - Academic
DatabaseTitle CrossRef
Materials Research Database
Technology Research Database
Computer and Information Systems Abstracts – Academic
Mechanical & Transportation Engineering Abstracts
ProQuest Computer Science Collection
Computer and Information Systems Abstracts
Materials Business File
Aerospace Database
Engineered Materials Abstracts
Biotechnology Research Abstracts
Chemoreception Abstracts
Advanced Technologies Database with Aerospace
ANTE: Abstracts in New Technology & Engineering
Civil Engineering Abstracts
Aluminium Industry Abstracts
Electronics & Communications Abstracts
Ceramic Abstracts
Neurosciences Abstracts
METADEX
Biotechnology and BioEngineering Abstracts
Computer and Information Systems Abstracts Professional
Solid State and Superconductivity Abstracts
Engineering Research Database
Calcium & Calcified Tissue Abstracts
Corrosion Abstracts
MEDLINE - Academic
DatabaseTitleList
Materials Research Database
MEDLINE - Academic
Database_xml – sequence: 1
  dbid: RIE
  name: IEEE Xplore Digital Library (IEL)
  url: https://ieeexplore.ieee.org/
  sourceTypes: Publisher
– sequence: 2
  dbid: 7X8
  name: MEDLINE - Academic
  url: https://search.proquest.com/medline
  sourceTypes: Aggregation Database
DeliveryMethod fulltext_linktorsrc
Discipline Computer Science
EISSN 2162-2388
EndPage 3483
ExternalDocumentID 10_1109_TNNLS_2020_3011671
9163155
Genre orig-research
GrantInformation_xml – fundername: European ICT Programme
  grantid: FP7-224631
  funderid: 10.13039/100011273
– fundername: Tools for Brain-Computer Interaction (TOBI)
– fundername: Hasler Foundation, Switzerland
  funderid: 10.13039/501100003475
GroupedDBID 0R~
4.4
5VS
6IK
97E
AAJGR
AARMG
AASAJ
AAWTH
ABAZT
ABQJQ
ABVLG
ACIWK
ACPRK
AENEX
AFRAH
AGQYO
AGSQL
AHBIQ
AKJIK
AKQYR
ALMA_UNASSIGNED_HOLDINGS
ATWAV
BEFXN
BFFAM
BGNUA
BKEBE
BPEOZ
EBS
EJD
IFIPE
IPLJI
JAVBF
M43
MS~
O9-
OCL
PQQKQ
RIA
RIE
RNS
AAYXX
CITATION
7QF
7QO
7QP
7QQ
7QR
7SC
7SE
7SP
7SR
7TA
7TB
7TK
7U5
8BQ
8FD
F28
FR3
H8D
JG9
JQ2
KR7
L7M
L~C
L~D
P64
7X8
ID FETCH-LOGICAL-c372t-ffde219e51a6857c068977ae08db22417c47391fbcff9613c7d95a2f684e6d793
IEDL.DBID RIE
ISICitedReferencesCount 5
ISSN 2162-237X
2162-2388
IngestDate Sun Nov 09 14:19:26 EST 2025
Mon Jun 30 04:07:48 EDT 2025
Sat Nov 29 01:40:07 EST 2025
Tue Nov 18 22:13:15 EST 2025
Wed Aug 27 02:39:34 EDT 2025
IsDoiOpenAccess false
IsOpenAccess true
IsPeerReviewed false
IsScholarly true
Issue 8
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
LinkModel DirectLink
MergedId FETCHMERGED-LOGICAL-c372t-ffde219e51a6857c068977ae08db22417c47391fbcff9613c7d95a2f684e6d793
Notes ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
content type line 14
content type line 23
ORCID 0000-0001-5819-1522
0000-0002-8879-2860
0000-0003-2033-2486
OpenAccessLink http://infoscience.epfl.ch/record/288122
PMID 32776882
PQID 2557978541
PQPubID 85436
PageCount 13
ParticipantIDs proquest_journals_2557978541
crossref_primary_10_1109_TNNLS_2020_3011671
crossref_citationtrail_10_1109_TNNLS_2020_3011671
ieee_primary_9163155
proquest_miscellaneous_2432861629
PublicationCentury 2000
PublicationDate 2021-08-01
PublicationDateYYYYMMDD 2021-08-01
PublicationDate_xml – month: 08
  year: 2021
  text: 2021-08-01
  day: 01
PublicationDecade 2020
PublicationPlace Piscataway
PublicationPlace_xml – name: Piscataway
PublicationTitle IEEE Transactions on Neural Networks and Learning Systems
PublicationTitleAbbrev TNNLS
PublicationYear 2021
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Publisher_xml – name: IEEE
– name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
References ref12
cour (ref8) 2011; 12
ref53
dempster (ref45) 1977; 39
ref11
ref54
mclachlan (ref46) 2008
zhu (ref36) 2014; 15
ref17
kingma (ref4) 2014
shental (ref28) 2004
ref16
mccallum (ref38) 1999
basu (ref27) 2002
urner (ref24) 2012; 22
bishop (ref3) 2006
ref51
ref50
kingma (ref52) 2014
ambroise (ref14) 2000
lehmann (ref44) 1998
joulin (ref19) 2012
ref48
ref47
ref42
ref41
karaletsos (ref43) 2015
ref9
ref40
orchard (ref7) 1972; 1
muslea (ref15) 2002
ref35
ref37
ref31
ref30
chang (ref29) 2007
ref32
bryan (ref34) 2013; 28
ref2
vahdat (ref49) 2018
ref1
kindermans (ref39) 2012
luo (ref18) 2010
ganchev (ref6) 2010; 11
ref23
ref26
ref25
ref20
mann (ref5) 2010; 11
ref22
bellare (ref33) 2009
ref21
liu (ref13) 2012
sun (ref10) 2010
References_xml – ident: ref48
  doi: 10.1016/0024-3795(94)90363-8
– start-page: 43
  year: 2009
  ident: ref33
  article-title: Alternating projections for learning with expectation constraints
  publication-title: Proc 25th Conf Uncertainty Artif Intell (UAI)
– volume: 11
  start-page: 2001
  year: 2010
  ident: ref6
  article-title: Posterior regularization for structured latent variable models
  publication-title: J Mach Learn Res
– volume: 22
  start-page: 1252
  year: 2012
  ident: ref24
  article-title: Learning from weak teachers
  publication-title: J Mach Learn Res Proc Track
– year: 1998
  ident: ref44
  article-title: Theory of point estimation
  publication-title: Springer Texts in Statistics
– ident: ref2
  doi: 10.1109/TNN.1998.712192
– start-page: 1864
  year: 2018
  ident: ref49
  article-title: DVAE#: Discrete variational autoencoders with relaxed Boltzmann priors
  publication-title: Proc Adv Neural Inf Process Syst
– start-page: 9
  year: 2012
  ident: ref39
  article-title: A P300 BCI for the masses: Prior information enables instant unsupervised spelling
  publication-title: Proc Adv Neural Inf Process Syst
– ident: ref32
  doi: 10.1145/1390334.1390436
– ident: ref11
  doi: 10.1007/978-3-642-23808-6_36
– ident: ref25
  doi: 10.1145/279943.279962
– year: 2015
  ident: ref43
  article-title: Bayesian representation learning with oracle constraints
  publication-title: arXiv:1506.05011
– ident: ref30
  doi: 10.1145/1553374.1553457
– volume: 28
  start-page: 208
  year: 2013
  ident: ref34
  article-title: An efficient posterior regularized latent variable model for interactive sound source separation
  publication-title: Proc 30th Int Conf Mach Learn (ICML)
– start-page: 3581
  year: 2014
  ident: ref52
  article-title: Semi-supervised learning with deep generative models
  publication-title: Proc Adv Neural Inf Process Syst
– start-page: 465
  year: 2004
  ident: ref28
  article-title: Computing Gaussian mixture models with EM using equivalence constraints
  publication-title: Proc Adv Neural Inf Process Syst
– ident: ref21
  doi: 10.1016/j.patcog.2009.03.027
– ident: ref41
  doi: 10.1007/978-1-4471-0211-3
– volume: 15
  start-page: 1799
  year: 2014
  ident: ref36
  article-title: Bayesian inference with posterior regularization and applications to infinite latent SVMs
  publication-title: J Mach Learn Res
– ident: ref9
  doi: 10.1109/CVPR.2013.52
– start-page: 593
  year: 2010
  ident: ref10
  article-title: Multi-label learning with weak label
  publication-title: Proc 24th AAAI Conf Artif Intell
– volume: 39
  start-page: 1
  year: 1977
  ident: ref45
  article-title: Maximum likelihood from incomplete data via the EM algorithm
  publication-title: J Roy Statist Soc B Statist Methodol
  doi: 10.1111/j.2517-6161.1977.tb01600.x
– ident: ref51
  doi: 10.1109/ACCESS.2020.2977671
– ident: ref17
  doi: 10.1137/1.9781611972818.52
– start-page: 1279
  year: 2012
  ident: ref19
  article-title: A convex relaxation for weakly supervised classifiers
  publication-title: Proc 29th Int Conf Mach Learn (ICML)
– year: 2006
  ident: ref3
  publication-title: Pattern Recognition and Machine Learning
– ident: ref40
  doi: 10.1016/j.patcog.2008.07.014
– start-page: 27
  year: 2002
  ident: ref27
  article-title: Semi-supervised clustering by seeding
  publication-title: Proc 19th Int'l Conf Machine Learning (ICML)
– ident: ref26
  doi: 10.1145/354756.354805
– ident: ref37
  doi: 10.1109/IJCNN.2009.5178922
– volume: 11
  start-page: 955
  year: 2010
  ident: ref5
  article-title: Generalized expectation criteria for semi-supervised learning with weakly labeled data
  publication-title: J Mach Learn Res
– year: 2008
  ident: ref46
  article-title: The EM algorithm and extensions
  publication-title: Wiley Series in Probability and Statistics
– ident: ref12
  doi: 10.1109/TPAMI.2012.139
– volume: 12
  start-page: 1501
  year: 2011
  ident: ref8
  article-title: Learning from partial labels
  publication-title: J Mach Learn Res
– ident: ref22
  doi: 10.1089/cmb.2010.0034
– ident: ref35
  doi: 10.3115/v1/P14-1031
– start-page: 280
  year: 2007
  ident: ref29
  article-title: Guiding semi-supervision with constraint-driven learning
  publication-title: Proc Assoc Comp Ling (ACL)
– ident: ref54
  doi: 10.1088/1741-2560/13/3/036018
– start-page: 435
  year: 2002
  ident: ref15
  article-title: Active + semi-supervised learning = robust multi-view learning
  publication-title: Proc 19th Int Conf Mach Learn ICML
– ident: ref47
  doi: 10.1016/S0024-3795(99)00013-0
– start-page: 225
  year: 2012
  ident: ref13
  article-title: TrueLabel + Confusions: A spectrum of probabilistic models in analyzing multiple ratings
  publication-title: Proc 29th Int Conf Mach Learn (ICML)
– start-page: 1504
  year: 2010
  ident: ref18
  article-title: Learning from candidate labeling sets
  publication-title: Proc Adv Neural Inf Process Syst
– ident: ref50
  doi: 10.1109/ICCV.2019.00654
– ident: ref53
  doi: 10.1088/1741-2560/11/3/036003
– ident: ref31
  doi: 10.1145/1273496.1273571
– start-page: 52
  year: 1999
  ident: ref38
  article-title: Text classification by bootstrapping with keywords, EM and shrinkage
  publication-title: Proc Workshop Unsupervised Learn Natural Lang Process
– ident: ref1
  doi: 10.7551/mitpress/9780262033589.001.0001
– volume: 1
  start-page: 697
  year: 1972
  ident: ref7
  article-title: A missing information principle: Theory and applications
  publication-title: Proc 6th Berkeley Symp Math Stat Prob
– ident: ref20
  doi: 10.1109/ICDM.2011.84
– ident: ref23
  doi: 10.1111/j.0006-341X.2004.00156.x
– ident: ref16
  doi: 10.1007/11551188_45
– year: 2014
  ident: ref4
  article-title: Auto-encoding variational Bayes
  publication-title: arXiv:1312.6114
– start-page: 161
  year: 2000
  ident: ref14
  publication-title: EM Algorithm for Partially Known Labels
– ident: ref42
  doi: 10.18653/v1/P16-1228
SSID ssj0000605649
Score 2.3975866
Snippet This work studies the class of algorithms for learning with side-information that emerges by extending generative models with embedded context-related...
SourceID proquest
crossref
ieee
SourceType Aggregation Database
Enrichment Source
Index Database
Publisher
StartPage 3471
SubjectTerms Algorithms
Approximation algorithms
Artificial neural networks
Bayesian analysis
Brain-computer interfaces
Classification
Context
Context awareness
Context modeling
Deep learning
expectation–maximization (EM)
Extremities
finite mixture models (FMMs)
Learning systems
Machine learning
Mathematical models
maximum likelihood (ML)
Maximum likelihood estimation
Neural networks
Parameter estimation
Probabilistic logic
Probabilistic models
side information
unsupervised learning
variational autoencoder (VA)
Title Context-Aware Learning for Generative Models
URI https://ieeexplore.ieee.org/document/9163155
https://www.proquest.com/docview/2557978541
https://www.proquest.com/docview/2432861629
Volume 32
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
journalDatabaseRights – providerCode: PRVIEE
  databaseName: IEEE Xplore Digital Library (IEL)
  customDbUrl:
  eissn: 2162-2388
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0000605649
  issn: 2162-237X
  databaseCode: RIE
  dateStart: 20120101
  isFulltext: true
  titleUrlDefault: https://ieeexplore.ieee.org/
  providerName: IEEE
linkProvider IEEE