A robust variational autoencoder using beta divergence

Detailed bibliography
Published in: Knowledge-based systems, Volume 238; p. 107886
Main authors: Akrami, Haleh; Joshi, Anand A.; Li, Jian; Aydöre, Sergül; Leahy, Richard M.
Medium: Journal Article
Language: English
Published: Netherlands: Elsevier B.V.; Elsevier Science Ltd, 28.02.2022
Subjects:
ISSN: 0950-7051, 1872-7409
Online access: Get full text
Abstract The presence of outliers can severely degrade learned representations and performance of deep learning methods and hence disproportionately affect the training process, leading to incorrect conclusions about the data. For example, anomaly detection using deep generative models is typically only possible when similar anomalies (or outliers) are not present in the training data. Here we focus on variational autoencoders (VAEs). While the VAE is a popular framework for anomaly detection tasks, we observe that the VAE is unable to detect outliers when the training data contains anomalies that have the same distribution as those in test data. In this paper we focus on robustness to outliers in training data in VAE settings using concepts from robust statistics. We propose a variational lower bound that leads to a robust VAE model that has the same computational complexity as the standard VAE and contains a single automatically-adjusted tuning parameter to control the degree of robustness. We present mathematical formulations for robust variational autoencoders (RVAEs) for Bernoulli, Gaussian and categorical variables. The RVAE model is based on beta-divergence rather than the standard Kullback–Leibler (KL) divergence. We demonstrate the performance of our proposed β-divergence-based autoencoder for a variety of image and categorical datasets showing improved robustness to outliers both qualitatively and quantitatively. We also illustrate the use of our robust VAE for detection of lesions in brain images, formulated as an anomaly detection task. Finally, we suggest a method to tune the hyperparameter of RVAE which makes our model completely unsupervised.
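For orientation only: the abstract names the β-divergence but does not state its form. A commonly used definition, the density power divergence of Basu et al. (cited in the reference list below), is sketched here; the paper's robust evidence lower bound may weight or normalize these terms differently. As β approaches 0 the divergence reduces to the KL divergence, which is why a single tuning parameter can control the degree of robustness.

\[
D_\beta(g \,\|\, f_\theta)
  = \int \Big[\, f_\theta(x)^{1+\beta}
  - \tfrac{1+\beta}{\beta}\, g(x)\, f_\theta(x)^{\beta}
  + \tfrac{1}{\beta}\, g(x)^{1+\beta} \Big]\, dx,
  \qquad \beta > 0,
\]
\[
\lim_{\beta \to 0} D_\beta(g \,\|\, f_\theta) = \mathrm{KL}(g \,\|\, f_\theta),
\]

where g denotes the data (empirical) density and f_θ the model density parameterized by the decoder.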
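A minimal sketch, assuming a Bernoulli decoder and PyTorch (the reference list cites PyTorch and Adam, but none of the code or names below is taken from the paper): the usual binary cross-entropy reconstruction term of a VAE is replaced with a β-cross-entropy-style term derived from the divergence above, while the Gaussian KL regularizer on the latent code is left unchanged. The function names and the constant shift that keeps the β → 0 limit finite are illustrative assumptions.

```python
import torch

def beta_bernoulli_recon(x, x_hat, beta, eps=1e-6):
    """Illustrative beta-divergence-style reconstruction term (Bernoulli decoder).

    As beta -> 0 this tends, up to additive constants, to the standard
    negative Bernoulli log-likelihood (binary cross-entropy).
    """
    x_hat = torch.clamp(x_hat, eps, 1.0 - eps)
    # Cross term of the density power divergence, shifted by a constant so
    # the small-beta limit stays finite.
    cross = (1.0 + beta) / beta * (
        1.0 - (x * x_hat.pow(beta) + (1.0 - x) * (1.0 - x_hat).pow(beta))
    )
    # Integral of the model density raised to the power 1 + beta.
    power = x_hat.pow(1.0 + beta) + (1.0 - x_hat).pow(1.0 + beta)
    return (cross + power).sum(dim=-1).mean()

def robust_elbo_loss(x, x_hat, mu, logvar, beta):
    """Sketch of a robust VAE loss: beta reconstruction term plus the
    standard Gaussian KL term on the latent code (unchanged)."""
    recon = beta_bernoulli_recon(x, x_hat, beta)
    kl = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp(), dim=-1).mean()
    return recon + kl
```

In this sketch beta is a fixed hyperparameter; the paper's final contribution is a method to tune it automatically, which is what makes the model completely unsupervised.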
ArticleNumber 107886
Author Joshi, Anand A.
Aydöre, Sergül
Leahy, Richard M.
Akrami, Haleh
Li, Jian
Author_xml – sequence: 1
  givenname: Haleh
  orcidid: 0000-0002-1678-8926
  surname: Akrami
  fullname: Akrami, Haleh
  email: akrami@usc.edu
  organization: Signal and Image Processing Institute, University of Southern California, Los Angeles, CA, USA
– sequence: 2
  givenname: Anand A.
  orcidid: 0000-0002-9582-3848
  surname: Joshi
  fullname: Joshi, Anand A.
  email: ajoshi@usc.edu
  organization: Signal and Image Processing Institute, University of Southern California, Los Angeles, CA, USA
– sequence: 3
  givenname: Jian
  orcidid: 0000-0002-1691-8727
  surname: Li
  fullname: Li, Jian
  email: jli112@mgh.harvard.edu
  organization: Athinoula A. Martinos Center for Biomedical Imaging Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, USA
– sequence: 4
  givenname: Sergül
  surname: Aydöre
  fullname: Aydöre, Sergül
  email: sergulaydore@gmail.com
  organization: Amazon Web Services, New York, NY, USA
– sequence: 5
  givenname: Richard M.
  orcidid: 0000-0002-7278-5471
  surname: Leahy
  fullname: Leahy, Richard M.
  email: leahy@sipi.usc.edu
  organization: Signal and Image Processing Institute, University of Southern California, Los Angeles, CA, USA
BackLink https://www.ncbi.nlm.nih.gov/pubmed/36714396 (View this record in MEDLINE/PubMed)
CitedBy_id crossref_primary_10_1016_j_eswa_2023_121074
crossref_primary_10_1016_j_automatica_2024_112108
crossref_primary_10_1016_j_cose_2023_103251
crossref_primary_10_26599_CVM_2025_9450403
crossref_primary_10_1016_j_engappai_2023_106684
crossref_primary_10_1109_ACCESS_2025_3594877
crossref_primary_10_1007_s11368_024_03801_1
crossref_primary_10_1016_j_future_2024_107630
crossref_primary_10_1109_TASE_2024_3486688
crossref_primary_10_3390_app15115830
crossref_primary_10_1051_epjconf_202429509033
crossref_primary_10_1016_j_eswa_2023_120214
crossref_primary_10_1016_j_ascom_2023_100739
crossref_primary_10_1016_j_neucom_2025_131423
crossref_primary_10_1007_s00500_025_10702_z
crossref_primary_10_1016_j_knosys_2023_110287
crossref_primary_10_2196_77893
crossref_primary_10_1016_j_chemolab_2024_105276
crossref_primary_10_1002_hbm_70075
crossref_primary_10_1016_j_swevo_2024_101520
crossref_primary_10_1109_TIA_2025_3549413
Cites_doi 10.1126/science.1127647
10.1080/00031305.1988.10475585
10.1080/03610928808829834
10.3390/e12061532
10.1016/j.media.2016.07.009
10.1109/5.726791
10.1016/j.media.2020.101713
10.1109/MCSE.2011.37
10.1109/ICCV.2019.01037
10.1093/biomet/85.3.549
10.3390/e12020262
10.1093/neuros/nyx103
10.1080/00949659408811609
10.1609/aaai.v31i1.10777
10.1016/0377-0427(87)90125-7
ContentType Journal Article
Copyright 2021 Elsevier B.V.
Copyright Elsevier Science Ltd. Feb 28, 2022
Copyright_xml – notice: 2021 Elsevier B.V.
– notice: Copyright Elsevier Science Ltd. Feb 28, 2022
DBID AAYXX
CITATION
NPM
7SC
8FD
E3H
F2A
JQ2
L7M
L~C
L~D
7X8
DOI 10.1016/j.knosys.2021.107886
DatabaseName CrossRef
PubMed
Computer and Information Systems Abstracts
Technology Research Database
Library & Information Sciences Abstracts (LISA)
Library & Information Science Abstracts (LISA)
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
MEDLINE - Academic
DatabaseTitle CrossRef
PubMed
Technology Research Database
Computer and Information Systems Abstracts – Academic
Library and Information Science Abstracts (LISA)
ProQuest Computer Science Collection
Computer and Information Systems Abstracts
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts Professional
MEDLINE - Academic
DatabaseTitleList PubMed
MEDLINE - Academic
Technology Research Database
Database_xml – sequence: 1
  dbid: NPM
  name: PubMed
  url: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed
  sourceTypes: Index Database
– sequence: 2
  dbid: 7X8
  name: MEDLINE - Academic
  url: https://search.proquest.com/medline
  sourceTypes: Aggregation Database
DeliveryMethod fulltext_linktorsrc
Discipline Computer Science
Statistics
EISSN 1872-7409
ExternalDocumentID 36714396
10_1016_j_knosys_2021_107886
S0950705121010534
Genre Journal Article
GrantInformation_xml – fundername: NINDS NIH HHS
  grantid: R01 NS074980
– fundername: NIBIB NIH HHS
  grantid: R01 EB026299
ISICitedReferencesCount 31
ISSN 0950-7051
IngestDate Sun Sep 28 01:45:17 EDT 2025
Fri Nov 14 18:45:26 EST 2025
Mon Jul 21 06:04:55 EDT 2025
Tue Nov 18 21:55:43 EST 2025
Sat Nov 29 07:07:00 EST 2025
Fri Feb 23 02:41:34 EST 2024
IsDoiOpenAccess false
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Keywords Outlier
β divergence
Robust anomaly detection
VAE
RVAE
Language English
LinkModel OpenURL
Notes ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
content type line 14
content type line 23
ORCID 0000-0002-9582-3848
0000-0002-1678-8926
0000-0002-1691-8727
0000-0002-7278-5471
OpenAccessLink https://www.ncbi.nlm.nih.gov/pmc/articles/9881733
PMID 36714396
PQID 2638081956
PQPubID 2035257
ParticipantIDs proquest_miscellaneous_2771089387
proquest_journals_2638081956
pubmed_primary_36714396
crossref_citationtrail_10_1016_j_knosys_2021_107886
crossref_primary_10_1016_j_knosys_2021_107886
elsevier_sciencedirect_doi_10_1016_j_knosys_2021_107886
PublicationCentury 2000
PublicationDate 2022-02-28
PublicationDateYYYYMMDD 2022-02-28
PublicationDate_xml – month: 02
  year: 2022
  text: 2022-02-28
  day: 28
PublicationDecade 2020
PublicationPlace Netherlands
PublicationPlace_xml – name: Netherlands
– name: Amsterdam
PublicationTitle Knowledge-based systems
PublicationTitleAlternate Knowl Based Syst
PublicationYear 2022
Publisher Elsevier B.V
Elsevier Science Ltd
Publisher_xml – name: Elsevier B.V
– name: Elsevier Science Ltd
References Larsen, Sønderby, Larochelle, Winther (b39) 2015
Pawlowski, Lee, Rajchl, McDonagh, Ferrante, Kamnitsas, Cooke, Stevenson, Khetani, Newman (b13) 2018
Chen, You, Tezcan, Konukoglu (b15) 2020
An, Cho (b10) 2015
Zhou, Paffenroth (b20) 2017
Kusner (b7) 2017
Vincent (b18) 2008
Walt, Colbert, Varoquaux (b42) 2011; 13
D. Im Im, S. Ahn, R. Memisevic, Y. Bengio, Denoising criterion for variational auto-encoding framework, in: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 31, 2017.
Kingma, Ba (b43) 2014
Basu, Harris, Hjort, Jones (b23) 1998; 85
Cichocki, Amari (b28) 2010; 12
X. Ma, A.R. Triki, M. Berman, C. Sagonas, J. Cali, M.B. Blaschko, A Bayesian optimization framework for neural network compression, in: Proceedings of the IEEE International Conference on Computer Vision, 2019, pp. 10274–10283.
(b36) 2020
Hsu, Zhang, Glass (b9) 2017
Eguchi, Kato (b24) 2010; 12
Brent (b45) 2013
LeCun, Bottou, Bengio, Haffner (b30) 1998; 86
(b34) 1999
Pu, Gan, Henao, Yuan, Li, Stevens, Carin (b8) 2016
Loaiza-Ganem, Cunningham (b44) 2019
Gather, Kale (b2) 1988; 17
(b35) 2020
Qi, Wang, Zheng, Wu (b19) 2014
Hinton, Salakhutdinov (b1) 2006; 313
Baur, Wiestler, Albarqouni, Navab (b12) 2018
Wingate, Weber (b29) 2013
Cao, Li, Nelson, Kon (b22) 2019
You, Tezcan, Chen, Konukoglu (b11) 2019
Eduardo (b17) 2019
Futami, Sato, Sugiyama (b25) 2017
Goodfellow, Pouget-Abadie, Mirza, Xu, Warde-Farley, Ozair, Courville, Bengio (b53) 2014
Cohen, Afshar, Tapson, van Schaik (b31) 2017
Maier (b33) 2017; 35
Kingma, Welling (b5) 2013
Nalisnick, Matsukawa, Teh, Gorur, Lakshminarayanan (b16) 2018
Villanueva-Meyer, Mabray, Cha (b37) 2017; 81
Xiao, Rasul, Vollgraf (b32) 2017
Paszke, Gross, Chintala, Chanan, Yang, DeVito, Lin, Desmaison, Antiga, Lerer (b40) 2017
Rousseeuw (b49) 1987; 20
Zimmerer, Kohl, Petersen, Isensee, Maier-Hein (b14) 2018
Zellner (b27) 1988; 42
Chen, Konukoglu (b50) 2018
Press, Flannery, Teukolsky, Vetterling (b46) 1989
Bishop (b6) 2006
Dewancker, McCourt, Clark (b47) 2016
Basu, Sarkar (b51) 1994; 50
Dai, Wipf (b52) 2019
Dai, Wang, Aston, Hua, Wipf (b26) 2018; 19
Huber (b3) 2011
Zhai, Chen, Zhang, Wang (b21) 2017
Hampel, Ronchetti, Rousseeuw, Stahel (b4) 1986
Pedregosa, Varoquaux, Gramfort, Michel, Thirion, Grisel, Blondel, Prettenhofer, Weiss, Dubourg (b41) 2011; 12
Dewancker (10.1016/j.knosys.2021.107886_b47) 2016
Loaiza-Ganem (10.1016/j.knosys.2021.107886_b44) 2019
Rousseeuw (10.1016/j.knosys.2021.107886_b49) 1987; 20
You (10.1016/j.knosys.2021.107886_b11) 2019
Kusner (10.1016/j.knosys.2021.107886_b7) 2017
Zhai (10.1016/j.knosys.2021.107886_b21) 2017
Cohen (10.1016/j.knosys.2021.107886_b31) 2017
Walt (10.1016/j.knosys.2021.107886_b42) 2011; 13
10.1016/j.knosys.2021.107886_b38
Huber (10.1016/j.knosys.2021.107886_b3) 2011
Paszke (10.1016/j.knosys.2021.107886_b40) 2017
Hsu (10.1016/j.knosys.2021.107886_b9) 2017
Kingma (10.1016/j.knosys.2021.107886_b43) 2014
(10.1016/j.knosys.2021.107886_b34) 1999
Baur (10.1016/j.knosys.2021.107886_b12) 2018
Chen (10.1016/j.knosys.2021.107886_b15) 2020
(10.1016/j.knosys.2021.107886_b36) 2020
Larsen (10.1016/j.knosys.2021.107886_b39) 2015
Dai (10.1016/j.knosys.2021.107886_b26) 2018; 19
Pedregosa (10.1016/j.knosys.2021.107886_b41) 2011; 12
Gather (10.1016/j.knosys.2021.107886_b2) 1988; 17
LeCun (10.1016/j.knosys.2021.107886_b30) 1998; 86
Villanueva-Meyer (10.1016/j.knosys.2021.107886_b37) 2017; 81
Basu (10.1016/j.knosys.2021.107886_b51) 1994; 50
Goodfellow (10.1016/j.knosys.2021.107886_b53) 2014
Bishop (10.1016/j.knosys.2021.107886_b6) 2006
Eduardo (10.1016/j.knosys.2021.107886_b17) 2019
Press (10.1016/j.knosys.2021.107886_b46) 1989
Hinton (10.1016/j.knosys.2021.107886_b1) 2006; 313
Cichocki (10.1016/j.knosys.2021.107886_b28) 2010; 12
(10.1016/j.knosys.2021.107886_b35) 2020
Maier (10.1016/j.knosys.2021.107886_b33) 2017; 35
Zellner (10.1016/j.knosys.2021.107886_b27) 1988; 42
An (10.1016/j.knosys.2021.107886_b10) 2015
Qi (10.1016/j.knosys.2021.107886_b19) 2014
Eguchi (10.1016/j.knosys.2021.107886_b24) 2010; 12
Hampel (10.1016/j.knosys.2021.107886_b4) 1986
Futami (10.1016/j.knosys.2021.107886_b25) 2017
Brent (10.1016/j.knosys.2021.107886_b45) 2013
Basu (10.1016/j.knosys.2021.107886_b23) 1998; 85
Zimmerer (10.1016/j.knosys.2021.107886_b14) 2018
Zhou (10.1016/j.knosys.2021.107886_b20) 2017
Wingate (10.1016/j.knosys.2021.107886_b29) 2013
Kingma (10.1016/j.knosys.2021.107886_b5) 2013
Chen (10.1016/j.knosys.2021.107886_b50) 2018
Dai (10.1016/j.knosys.2021.107886_b52) 2019
Pawlowski (10.1016/j.knosys.2021.107886_b13) 2018
Pu (10.1016/j.knosys.2021.107886_b8) 2016
Cao (10.1016/j.knosys.2021.107886_b22) 2019
Nalisnick (10.1016/j.knosys.2021.107886_b16) 2018
Xiao (10.1016/j.knosys.2021.107886_b32) 2017
10.1016/j.knosys.2021.107886_b48
Vincent (10.1016/j.knosys.2021.107886_b18) 2008
References_xml – volume: 85
  start-page: 549
  year: 1998
  end-page: 559
  ident: b23
  article-title: Robust and efficient estimation by minimising a density power divergence
  publication-title: Biometrika
– year: 1986
  ident: b4
  article-title: Robust Statistics
– year: 2018
  ident: b50
  article-title: Unsupervised detection of lesions in brain MRI using constrained adversarial auto-encoders
– start-page: 1945
  year: 2017
  end-page: 1954
  ident: b7
  article-title: Grammar variational autoencoder
  publication-title: Proceedings of the 34th International Conference on Machine Learning-Volume 70
– year: 2018
  ident: b13
  article-title: Unsupervised lesion detection in brain CT using bayesian convolutional autoencoders
  publication-title: MIDL, Abstract Track, Non-Archival
– volume: 50
  start-page: 173
  year: 1994
  end-page: 185
  ident: b51
  article-title: The trade-off between robustness and efficiency and the effect of model smoothing in minimum disparity inference
  publication-title: J. Stat. Comput. Simul.
– volume: 81
  start-page: 397
  year: 2017
  end-page: 415
  ident: b37
  article-title: Current clinical brain tumor imaging
  publication-title: Neurosurgery
– year: 2020
  ident: b15
  article-title: Unsupervised lesion detection via image restoration with a normative prior
  publication-title: Med. Image Anal.
– volume: 12
  start-page: 262
  year: 2010
  end-page: 274
  ident: b24
  article-title: Entropy and divergence associated with power function and the statistical application
  publication-title: Entropy
– year: 2006
  ident: b6
  article-title: Pattern Recognition and Machine Learning
– start-page: 161
  year: 2018
  end-page: 169
  ident: b12
  article-title: Deep autoencoding models for unsupervised anomaly segmentation in brain mr images
  publication-title: International MICCAI Brainlesion Workshop
– start-page: 13266
  year: 2019
  end-page: 13276
  ident: b44
  article-title: The continuous Bernoulli: fixing a pervasive error in variational autoencoders
  publication-title: Advances in Neural Information Processing Systems
– year: 2015
  ident: b39
  article-title: Autoencoding beyond pixels using a learned similarity metric
– volume: 12
  start-page: 2825
  year: 2011
  end-page: 2830
  ident: b41
  article-title: Scikit-learn: Machine learning in Python
  publication-title: J. Mach. Learn. Res.
– year: 2014
  ident: b43
  article-title: Adam: A method for stochastic optimization
– volume: 86
  start-page: 2278
  year: 1998
  end-page: 2324
  ident: b30
  article-title: Gradient-based learning applied to document recognition
  publication-title: Proc. IEEE
– year: 2013
  ident: b29
  article-title: Automated variational inference in probabilistic programming
– year: 2013
  ident: b45
  article-title: Algorithms for Minimization Without Derivatives
– volume: 12
  start-page: 1532
  year: 2010
  end-page: 1568
  ident: b28
  article-title: Families of alpha-beta-and gamma-divergences: Flexible and robust measures of similarities
  publication-title: Entropy
– year: 1999
  ident: b34
  article-title: KDD cup 1999 data
– volume: 313
  start-page: 504
  year: 2006
  end-page: 507
  ident: b1
  article-title: Reducing the dimensionality of data with neural networks
  publication-title: Science
– start-page: 6716
  year: 2014
  end-page: 6720
  ident: b19
  article-title: Robust feature learning by stacked autoencoder with maximum correntropy criterion
  publication-title: 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
– start-page: 540
  year: 2019
  end-page: 556
  ident: b11
  article-title: Unsupervised lesion detection via image restoration with a normative prior
  publication-title: International Conference on Medical Imaging with Deep Learning
– year: 2019
  ident: b22
  article-title: Coupled VAE: Improved accuracy and robustness of a variational autoencoder
– year: 1989
  ident: b46
  article-title: Numerical Recipes, Vol. 3
– start-page: 2352
  year: 2016
  end-page: 2360
  ident: b8
  article-title: Variational autoencoder for deep learning of images, labels and captions
  publication-title: Advances in Neural Information Processing Systems
– year: 2016
  ident: b47
  article-title: Bayesian optimization for machine learning: A practical guidebook
– volume: 13
  start-page: 22
  year: 2011
  end-page: 30
  ident: b42
  article-title: The NumPy array: a structure for efficient numerical computation
  publication-title: Comput. Sci. Eng.
– start-page: 2672
  year: 2014
  end-page: 2680
  ident: b53
  article-title: Generative adversarial nets
  publication-title: Advances in Neural Information Processing Systems
– year: 2019
  ident: b17
  article-title: Robust variational autoencoders for outlier detection in mixed-type data
– year: 2017
  ident: b9
  article-title: Learning latent representations for speech generation and transformation
– start-page: 356
  year: 2017
  end-page: 367
  ident: b21
  article-title: Robust variational auto-encoder for radar HRRP target recognition
  publication-title: International Conference on Intelligent Science and Big Data Engineering
– volume: 20
  start-page: 53
  year: 1987
  end-page: 65
  ident: b49
  article-title: Silhouettes: a graphical aid to the interpretation and validation of cluster analysis
  publication-title: J. Comput. Appl. Math.
– year: 2017
  ident: b25
  article-title: Variational inference based on robust divergences
– volume: 42
  start-page: 278
  year: 1988
  end-page: 280
  ident: b27
  article-title: Optimal information processing and Bayes’s theorem
  publication-title: Amer. Statist.
– year: 2013
  ident: b5
  article-title: Auto-encoding variational Bayes
– reference: X. Ma, A.R. Triki, M. Berman, C. Sagonas, J. Cali, M.B. Blaschko, A Bayesian optimization framework for neural network compression, in: Proceedings of the IEEE International Conference on Computer Vision, 2019, pp. 10274–10283.
– volume: 19
  start-page: 1573
  year: 2018
  end-page: 1614
  ident: b26
  article-title: Connections with robust PCA and the role of emergent sparsity in variational autoencoder models
  publication-title: J. Mach. Learn. Res.
– volume: 35
  start-page: 250
  year: 2017
  end-page: 269
  ident: b33
  article-title: ISLES 2015-A public evaluation benchmark for ischemic stroke lesion segmentation from multispectral MRI
  publication-title: Med. Image Anal.
– year: 2020
  ident: b35
  article-title: NSL-KDD dataset
– start-page: 665
  year: 2017
  end-page: 674
  ident: b20
  article-title: Anomaly detection with robust deep autoencoders
  publication-title: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
– start-page: 1
  year: 2015
  end-page: 18
  ident: b10
  article-title: Variational autoencoder based anomaly detection using reconstruction probability
  publication-title: Special Lecture on IE, Vol. 2
– year: 2018
  ident: b16
  article-title: Do deep generative models know what they don’t know?
– year: 2017
  ident: b32
  article-title: Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms
– start-page: 1096
  year: 2008
  end-page: 1103
  ident: b18
  article-title: Extracting and composing robust features with denoising autoencoders
  publication-title: Proceedings of the 25th International Conference on Machine Learning
– volume: 17
  start-page: 3767
  year: 1988
  end-page: 3784
  ident: b2
  article-title: Maximum likelihood estimation in the presence of outliers
  publication-title: Comm. Statist. Theory Methods
– year: 2018
  ident: b14
  article-title: Context-encoding variational autoencoder for unsupervised anomaly detection
– year: 2017
  ident: b31
  article-title: EMNIST: an extension of MNIST to handwritten letters
– year: 2017
  ident: b40
  article-title: Automatic differentiation in pytorch
  publication-title: NIPS-W
– year: 2019
  ident: b52
  article-title: Diagnosing and enhancing VAE models
– year: 2020
  ident: b36
  article-title: UNSW-NB15
– reference: D. Im Im, S. Ahn, R. Memisevic, Y. Bengio, Denoising criterion for variational auto-encoding framework, in: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 31, 2017.
– year: 2011
  ident: b3
  article-title: Robust Statistics
– start-page: 540
  year: 2019
  ident: 10.1016/j.knosys.2021.107886_b11
  article-title: Unsupervised lesion detection via image restoration with a normative prior
– year: 2017
  ident: 10.1016/j.knosys.2021.107886_b40
  article-title: Automatic differentiation in pytorch
– volume: 313
  start-page: 504
  issue: 5786
  year: 2006
  ident: 10.1016/j.knosys.2021.107886_b1
  article-title: Reducing the dimensionality of data with neural networks
  publication-title: Science
  doi: 10.1126/science.1127647
– volume: 42
  start-page: 278
  issue: 4
  year: 1988
  ident: 10.1016/j.knosys.2021.107886_b27
  article-title: Optimal information processing and Bayes’s theorem
  publication-title: Amer. Statist.
  doi: 10.1080/00031305.1988.10475585
– year: 2017
  ident: 10.1016/j.knosys.2021.107886_b32
– year: 2016
  ident: 10.1016/j.knosys.2021.107886_b47
– year: 2013
  ident: 10.1016/j.knosys.2021.107886_b5
– start-page: 356
  year: 2017
  ident: 10.1016/j.knosys.2021.107886_b21
  article-title: Robust variational auto-encoder for radar HRRP target recognition
– start-page: 13266
  year: 2019
  ident: 10.1016/j.knosys.2021.107886_b44
  article-title: The continuous Bernoulli: fixing a pervasive error in variational autoencoders
– volume: 17
  start-page: 3767
  issue: 11
  year: 1988
  ident: 10.1016/j.knosys.2021.107886_b2
  article-title: Maximum likelihood estimation in the presence of outliers
  publication-title: Comm. Statist. Theory Methods
  doi: 10.1080/03610928808829834
– year: 2013
  ident: 10.1016/j.knosys.2021.107886_b45
– year: 2019
  ident: 10.1016/j.knosys.2021.107886_b52
– start-page: 1
  year: 2015
  ident: 10.1016/j.knosys.2021.107886_b10
  article-title: Variational autoencoder based anomaly detection using reconstruction probability
– volume: 12
  start-page: 1532
  issue: 6
  year: 2010
  ident: 10.1016/j.knosys.2021.107886_b28
  article-title: Families of alpha-beta-and gamma-divergences: Flexible and robust measures of similarities
  publication-title: Entropy
  doi: 10.3390/e12061532
– year: 1999
  ident: 10.1016/j.knosys.2021.107886_b34
– year: 2020
  ident: 10.1016/j.knosys.2021.107886_b35
– volume: 35
  start-page: 250
  year: 2017
  ident: 10.1016/j.knosys.2021.107886_b33
  article-title: ISLES 2015-A public evaluation benchmark for ischemic stroke lesion segmentation from multispectral MRI
  publication-title: Med. Image Anal.
  doi: 10.1016/j.media.2016.07.009
– year: 2020
  ident: 10.1016/j.knosys.2021.107886_b36
– year: 2018
  ident: 10.1016/j.knosys.2021.107886_b50
– year: 2015
  ident: 10.1016/j.knosys.2021.107886_b39
– year: 2017
  ident: 10.1016/j.knosys.2021.107886_b25
– start-page: 161
  year: 2018
  ident: 10.1016/j.knosys.2021.107886_b12
  article-title: Deep autoencoding models for unsupervised anomaly segmentation in brain mr images
– year: 2019
  ident: 10.1016/j.knosys.2021.107886_b17
– volume: 19
  start-page: 1573
  issue: 1
  year: 2018
  ident: 10.1016/j.knosys.2021.107886_b26
  article-title: Connections with robust PCA and the role of emergent sparsity in variational autoencoder models
  publication-title: J. Mach. Learn. Res.
– year: 1986
  ident: 10.1016/j.knosys.2021.107886_b4
– year: 2018
  ident: 10.1016/j.knosys.2021.107886_b16
– start-page: 2672
  year: 2014
  ident: 10.1016/j.knosys.2021.107886_b53
  article-title: Generative adversarial nets
– volume: 86
  start-page: 2278
  issue: 11
  year: 1998
  ident: 10.1016/j.knosys.2021.107886_b30
  article-title: Gradient-based learning applied to document recognition
  publication-title: Proc. IEEE
  doi: 10.1109/5.726791
– year: 2020
  ident: 10.1016/j.knosys.2021.107886_b15
  article-title: Unsupervised lesion detection via image restoration with a normative prior
  publication-title: Med. Image Anal.
  doi: 10.1016/j.media.2020.101713
– year: 1989
  ident: 10.1016/j.knosys.2021.107886_b46
– volume: 13
  start-page: 22
  issue: 2
  year: 2011
  ident: 10.1016/j.knosys.2021.107886_b42
  article-title: The NumPy array: a structure for efficient numerical computation
  publication-title: Comput. Sci. Eng.
  doi: 10.1109/MCSE.2011.37
– ident: 10.1016/j.knosys.2021.107886_b48
  doi: 10.1109/ICCV.2019.01037
– start-page: 2352
  year: 2016
  ident: 10.1016/j.knosys.2021.107886_b8
  article-title: Variational autoencoder for deep learning of images, labels and captions
– year: 2014
  ident: 10.1016/j.knosys.2021.107886_b43
– year: 2018
  ident: 10.1016/j.knosys.2021.107886_b13
  article-title: Unsupervised lesion detection in brain CT using bayesian convolutional autoencoders
– volume: 85
  start-page: 549
  issue: 3
  year: 1998
  ident: 10.1016/j.knosys.2021.107886_b23
  article-title: Robust and efficient estimation by minimising a density power divergence
  publication-title: Biometrika
  doi: 10.1093/biomet/85.3.549
– volume: 12
  start-page: 262
  issue: 2
  year: 2010
  ident: 10.1016/j.knosys.2021.107886_b24
  article-title: Entropy and divergence associated with power function and the statistical application
  publication-title: Entropy
  doi: 10.3390/e12020262
– year: 2017
  ident: 10.1016/j.knosys.2021.107886_b9
– year: 2019
  ident: 10.1016/j.knosys.2021.107886_b22
– volume: 81
  start-page: 397
  issue: 3
  year: 2017
  ident: 10.1016/j.knosys.2021.107886_b37
  article-title: Current clinical brain tumor imaging
  publication-title: Neurosurgery
  doi: 10.1093/neuros/nyx103
– year: 2006
  ident: 10.1016/j.knosys.2021.107886_b6
– year: 2018
  ident: 10.1016/j.knosys.2021.107886_b14
– volume: 50
  start-page: 173
  issue: 3–4
  year: 1994
  ident: 10.1016/j.knosys.2021.107886_b51
  article-title: The trade-off between robustness and efficiency and the effect of model smoothing in minimum disparity inference
  publication-title: J. Stat. Comput. Simul.
  doi: 10.1080/00949659408811609
– start-page: 1096
  year: 2008
  ident: 10.1016/j.knosys.2021.107886_b18
  article-title: Extracting and composing robust features with denoising autoencoders
– start-page: 665
  year: 2017
  ident: 10.1016/j.knosys.2021.107886_b20
  article-title: Anomaly detection with robust deep autoencoders
– ident: 10.1016/j.knosys.2021.107886_b38
  doi: 10.1609/aaai.v31i1.10777
– year: 2017
  ident: 10.1016/j.knosys.2021.107886_b31
– start-page: 6716
  year: 2014
  ident: 10.1016/j.knosys.2021.107886_b19
  article-title: Robust feature learning by stacked autoencoder with maximum correntropy criterion
– start-page: 1945
  year: 2017
  ident: 10.1016/j.knosys.2021.107886_b7
  article-title: Grammar variational autoencoder
– volume: 12
  start-page: 2825
  issue: Oct
  year: 2011
  ident: 10.1016/j.knosys.2021.107886_b41
  article-title: Scikit-learn: Machine learning in Python
  publication-title: J. Mach. Learn. Res.
– volume: 20
  start-page: 53
  year: 1987
  ident: 10.1016/j.knosys.2021.107886_b49
  article-title: Silhouettes: a graphical aid to the interpretation and validation of cluster analysis
  publication-title: J. Comput. Appl. Math.
  doi: 10.1016/0377-0427(87)90125-7
– year: 2011
  ident: 10.1016/j.knosys.2021.107886_b3
– year: 2013
  ident: 10.1016/j.knosys.2021.107886_b29
SSID ssj0002218
Score 2.5187771
Snippet The presence of outliers can severely degrade learned representations and performance of deep learning methods and hence disproportionately affect the training...
SourceID proquest
pubmed
crossref
elsevier
SourceType Aggregation Database
Index Database
Enrichment Source
Publisher
StartPage 107886
SubjectTerms Anomalies
Brain damage
Data
Data analysis
Deep learning
Lesions
Lower bounds
Neuroimaging
Outlier
Outliers (statistics)
Robust anomaly detection
Robust control
Robustness
RVAE
Statistics
Training
VAE
β divergence
Title A robust variational autoencoder using beta divergence
URI https://dx.doi.org/10.1016/j.knosys.2021.107886
https://www.ncbi.nlm.nih.gov/pubmed/36714396
https://www.proquest.com/docview/2638081956
https://www.proquest.com/docview/2771089387
Volume 238
WOSCitedRecordID wos000779159800013
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
journalDatabaseRights – providerCode: PRVESC
  databaseName: Elsevier SD Freedom Collection Journals 2021
  customDbUrl:
  eissn: 1872-7409
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0002218
  issn: 0950-7051
  databaseCode: AIEXJ
  dateStart: 19950201
  isFulltext: true
  titleUrlDefault: https://www.sciencedirect.com
  providerName: Elsevier
linkProvider Elsevier