Learning variational autoencoders via MCMC speed measures
| Published in: | Statistics and computing, Vol. 34, No. 5 |
|---|---|
| Main authors: | Hirt, Marcel; Kreouzis, Vasileios; Dellaportas, Petros |
| Format: | Journal Article |
| Language: | English |
| Published: | New York: Springer US, 01.10.2024 (Springer Nature B.V.) |
| ISSN: | 0960-3174, 1573-1375 |
| Online access: | Full text |
| Abstract | Variational autoencoders (VAEs) are popular likelihood-based generative models which can be efficiently trained by maximising an evidence lower bound. There has been much progress in improving the expressiveness of the variational distribution to obtain tighter variational bounds and increased generative performance. Whilst previous work has leveraged Markov chain Monte Carlo methods for constructing variational densities, gradient-based methods for adapting the proposal distributions for deep latent variable models have received less attention. This work suggests an entropy-based adaptation for a short-run Metropolis-adjusted Langevin or Hamiltonian Monte Carlo (HMC) chain while optimising a tighter variational bound to the log-evidence. Experiments show that this approach yields higher held-out log-likelihoods as well as improved generative metrics. Our implicit variational density can adapt to complicated posterior geometries of latent hierarchical representations arising in hierarchical VAEs. |
|---|---|
| ArticleNumber | 164 |
| Author | Hirt, Marcel; Kreouzis, Vasileios; Dellaportas, Petros |
| Author_xml | – sequence: 1 givenname: Marcel surname: Hirt fullname: Hirt, Marcel organization: School of Social Sciences and School of Physical and Mathematical Sciences, Nanyang Technological University – sequence: 2 givenname: Vasileios surname: Kreouzis fullname: Kreouzis, Vasileios organization: Department of Statistical Science, University College London – sequence: 3 givenname: Petros surname: Dellaportas fullname: Dellaportas, Petros email: p.dellaportas@ucl.ac.uk organization: Department of Statistical Science, University College London, Department of Statistics, Athens University of Economics and Business |
| CitedBy_id | 10.3390/pr13030861; 10.3390/en17205088 |
| ContentType | Journal Article |
| Copyright | The Author(s) 2024 The Author(s) 2024. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |
| DOI | 10.1007/s11222-024-10481-x |
| DatabaseName | SpringerOpen; CrossRef; ProQuest Computer Science Collection |
| Discipline | Statistics; Mathematics; Computer Science |
| EISSN | 1573-1375 |
| ISICitedReferencesCount | 2 |
| ISSN | 0960-3174 |
| IsDoiOpenAccess | true |
| IsOpenAccess | true |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 5 |
| Keywords | Adaptive MCMC; MALA; Generative models; Hierarchical models; HMC |
| Language | English |
| OpenAccessLink | https://link.springer.com/10.1007/s11222-024-10481-x |
| PQID | 3089862295 |
| PQPubID | 2043829 |
| PublicationCentury | 2000 |
| PublicationDate | 2024-10-01 |
| PublicationDecade | 2020 |
| PublicationPlace | New York |
| PublicationPlace_xml | – name: New York – name: Dordrecht |
| PublicationTitle | Statistics and computing |
| PublicationTitleAbbrev | Stat Comput |
| PublicationYear | 2024 |
| Publisher | Springer US Springer Nature B.V |
| SourceID | proquest crossref springer |
| SourceType | Aggregation Database Enrichment Source Index Database Publisher |
| SubjectTerms | Artificial Intelligence; Computer Science; Lower bounds; Markov chains; Monte Carlo simulation; Optimization; Original Paper; Probability and Statistics in Computer Science; Statistical Theory and Methods; Statistics and Computing/Statistics Programs |
| Title | Learning variational autoencoders via MCMC speed measures |
| URI | https://link.springer.com/article/10.1007/s11222-024-10481-x https://www.proquest.com/docview/3089862295 |
| Volume | 34 |
| hasFullText | 1 |
| inHoldings | 1 |
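The Abstract field above describes refining a variational posterior with a short-run Metropolis-adjusted Langevin (MALA) chain whose proposal is adapted through an entropy-based objective. The sketch below is a minimal illustration of such a short-run MALA refinement only, not the authors' implementation: the entropy-based step-size adaptation from the paper is omitted, and the target density, `step`, and `n_steps` are illustrative placeholders.

```python
# Minimal sketch of a short-run Metropolis-adjusted Langevin (MALA) chain of
# the kind the abstract describes. NOT the authors' code: the entropy-based
# step-size adaptation is omitted; step/n_steps are placeholder assumptions.
import numpy as np


def mala_chain(log_prob, grad_log_prob, z0, step=0.05, n_steps=5, rng=None):
    """Run a short MALA chain targeting exp(log_prob); return the final state."""
    rng = np.random.default_rng() if rng is None else rng

    def log_q(a, b):
        # Log density (up to a constant) of proposing `a` from `b` under the
        # Langevin proposal N(b + step * grad_log_prob(b), 2 * step * I).
        diff = a - b - step * grad_log_prob(b)
        return -np.sum(diff ** 2) / (4.0 * step)

    z = np.asarray(z0, dtype=float)
    for _ in range(n_steps):
        # Gradient-informed Langevin proposal.
        z_prop = (z + step * grad_log_prob(z)
                  + np.sqrt(2.0 * step) * rng.standard_normal(z.shape))
        # Metropolis-Hastings correction for the asymmetric proposal.
        log_alpha = (log_prob(z_prop) + log_q(z, z_prop)
                     - log_prob(z) - log_q(z_prop, z))
        if np.log(rng.uniform()) < log_alpha:
            z = z_prop
    return z


# Toy usage: target a standard Gaussian over a 4-dimensional latent.
log_prob = lambda z: -0.5 * np.sum(z ** 2)
grad_log_prob = lambda z: -z
z_init = np.zeros(4)
z_refined = mala_chain(log_prob, grad_log_prob, z_init)
```

In the VAE setting the abstract describes, `log_prob` would be the unnormalised posterior over latents (log p(x, z) in z), `z_init` would come from the amortised encoder, and the final chain state defines the implicit variational sample.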