Autoencoders reloaded
| Published in: | Biological Cybernetics: Advances in Computational Neuroscience and in Control and Information Theory for Biological Systems, Vol. 116, No. 4, pp. 389–406 |
|---|---|
| Authors: | Hervé Bourlard, Selen Hande Kabil |
| Format: | Journal article |
| Language: | English |
| Published: | Berlin/Heidelberg: Springer Berlin Heidelberg (Springer Nature B.V.), 1 August 2022 |
| ISSN: | 0340-1200 (print); 1432-0770 (electronic) |
| Online access: | Full text |
| Abstract | In Bourlard and Kamp (Biol Cybern 59(4):291–294, 1988), it was theoretically proven that autoencoders (AE) with a single hidden layer (previously called "auto-associative multilayer perceptrons") were, in the best case, implementing singular value decomposition (SVD) (Golub and Reinsch 1971), equivalent to principal component analysis (PCA) (Hotelling 1933; Jolliffe 1986). That is, AE are able to derive the eigenvalues that represent the amount of variance covered by each component, even in the presence of nonlinear (sigmoid-like, or any other nonlinear) activation functions on their hidden units. Today, with the renewed interest in "deep neural networks" (DNN), multiple types of (deep) AE are being investigated as an alternative to manifold learning (Cayton 2005) for conducting nonlinear feature extraction or fusion, each with its own specific (expected) properties. Many of these AE are currently being developed as powerful, nonlinear encoder–decoder models, or used to generate reduced and discriminant feature sets that are more amenable to different modeling and classification tasks. In this paper, we start by recalling and further clarifying the main conclusions of Bourlard and Kamp (1988), supporting them with extensive empirical evidence that could not be provided in 1988 due to dataset and processing limitations. Building on a full understanding of the underlying mechanisms, we show that it remains hard (although feasible) to go beyond state-of-the-art PCA/SVD techniques for auto-association. Finally, we present a brief overview of the autoencoder models mainly in use today and discuss their rationale, relations and application areas. |
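For readers skimming the record, the core result the abstract recalls can be stated compactly. The notation below (centered data matrix X, encoder W_1, decoder W_2, leading right singular vectors V_p) is ours, chosen for illustration; it is a sketch of the claim, not the paper's own statement:

```latex
% Linear single-hidden-layer autoencoder with p < d hidden units,
% trained to reconstruct a centered n x d data matrix X:
\[
  \min_{W_1 \in \mathbb{R}^{d \times p},\; W_2 \in \mathbb{R}^{p \times d}}
    \bigl\lVert X - X W_1 W_2 \bigr\rVert_F^2 ,
  \qquad X = U \Sigma V^\top \quad \text{(SVD)} .
\]
% Every global minimum satisfies
\[
  W_1 W_2 = V_p V_p^\top ,
\]
% the orthogonal projector onto the span of the p leading right singular
% vectors of X, i.e. the top-p principal (PCA) subspace, by the
% Eckart--Young theorem. The abstract's point is that a sigmoid-like
% nonlinearity on the hidden units does not move this optimum.
```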
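As a quick empirical companion to the claim that a single-hidden-layer AE cannot beat PCA/SVD at auto-association, here is a minimal, self-contained NumPy sketch (our illustration, not the authors' code; the synthetic data, learning rate and names such as `W1`/`W2` are all assumptions):

```python
# Train a linear single-hidden-layer autoencoder by gradient descent and
# compare its reconstruction error with the rank-p PCA/SVD baseline.
import numpy as np

rng = np.random.default_rng(0)
n, d, p = 500, 20, 5  # samples, input dimension, bottleneck size

# Synthetic data with a dominant p-dimensional subspace, then centered.
X = (rng.normal(size=(n, p)) @ rng.normal(size=(p, d))
     + 0.05 * rng.normal(size=(n, d)))
X -= X.mean(axis=0)

W1 = 0.1 * rng.normal(size=(d, p))  # encoder weights
W2 = 0.1 * rng.normal(size=(p, d))  # decoder weights
lr = 1e-3
for _ in range(20_000):
    H = X @ W1                  # hidden code, shape (n, p)
    R = H @ W2 - X              # reconstruction residual, shape (n, d)
    g2 = H.T @ R / n            # gradient of 0.5 * mean squared error w.r.t. W2
    g1 = X.T @ (R @ W2.T) / n   # ... and w.r.t. W1
    W1 -= lr * g1
    W2 -= lr * g2
ae_err = np.mean((X @ W1 @ W2 - X) ** 2)

# Rank-p SVD baseline: project onto the p leading right singular vectors.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
Vp = Vt[:p].T  # shape (d, p)
pca_err = np.mean((X @ Vp @ Vp.T - X) ** 2)

print(f"autoencoder MSE: {ae_err:.6f}")
print(f"rank-{p} SVD MSE: {pca_err:.6f}")  # lower bound for any linear AE
```

The two printed errors should agree to within optimization tolerance: the linear-AE loss has no suboptimal local minima (Baldi and Hornik 1989, cited in the reference list below), so gradient descent recovers the top-p principal subspace.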
| Authors | Hervé Bourlard (Idiap Research Institute; Ecole polytechnique fédérale de Lausanne (EPFL)); Selen Hande Kabil (Idiap Research Institute; Ecole polytechnique fédérale de Lausanne (EPFL); ORCID: 0000-0002-2588-4047; selen.kabil@idiap.ch) |
| Copyright | The Author(s) 2022. Published under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/). |
| DOI | 10.1007/s00422-022-00937-6 |
| PMCID | PMC9287259 |
| Funding | Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung (http://dx.doi.org/10.13039/501100001711) |
| Keywords | Autoencoders; Auto-associative multilayer perceptrons; Deep neural networks; Principal component analysis; Singular value decomposition |
| License | Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. |
| Notes | Communicated by Benjamin Lindner. |
| OpenAccessLink | https://link.springer.com/10.1007/s00422-022-00937-6 |
| PMID | 35727351 |
| References | Baldi P (2012) Autoencoders, unsupervised learning, and deep architectures. In: Proceedings of the ICML workshop on unsupervised and transfer learning. JMLR Workshop and Conference Proceedings, pp 37–49
Mairal J, Bach F, Ponce J, Sapiro G (2009) Online dictionary learning for sparse coding. In: Proceedings of the 26th annual international conference on machine learning, pp 689–696
Ng A (2011) CS294A lecture notes: sparse autoencoder. https://web.stanford.edu/class/cs294a/sparseAutoencoder.pdf
Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y (2014) Generative adversarial nets. In: Advances in neural information processing systems, vol 27
Lu X, Tsao Y, Matsuda S, Hori C (2013) Speech enhancement based on deep denoising autoencoder. In: Interspeech 2013, pp 436–440
Rumelhart DE, Hinton GE, Williams RJ (1985) Learning internal representations by error propagation. Tech. Rep., Institute for Cognitive Science, University of California, San Diego
De Leeuw J (2006) Principal component analysis of binary data by iterated singular value decomposition. Comput Stat Data Anal 50(1):21–39
Jolliffe I (1986) Principal component analysis, 2nd edn. Springer Series in Statistics. Springer, New York
Tibshirani R (1996) Regression shrinkage and selection via the lasso. J R Stat Soc Ser B (Methodol) 58(1):267–288
Morgan N, Bourlard H (1990) Generalization and parameter estimation in feedforward nets: some experiments. In: Advances in neural information processing systems 2. Morgan Kaufmann, pp 630–637
Schein AI, Saul LK, Ungar LH (2003) A generalized linear model for principal component analysis of binary data. In: International workshop on artificial intelligence and statistics. PMLR, pp 240–247
Fukai T, Asabuki T, Haga T (2021) Neural mechanisms for learning hierarchical structures of information. Curr Opin Neurobiol 70:145–153
Horn RA, Johnson CR (2013) Matrix analysis, 2nd edn. Cambridge University Press, Cambridge
Krzanowski W (1987) Cross-validation in principal component analysis. Biometrics, pp 575–584
Refinetti M, Goldt S (2022) The dynamics of representation learning in shallow, non-linear autoencoders. arXiv preprint arXiv:2201.02115
Gutiérrez L, Keith B (2018) A systematic literature review on word embeddings. In: International conference on software process improvement. Springer, pp 132–141
Magee JC, Grienberger C (2020) Synaptic plasticity forms and functions. Annu Rev Neurosci 43:95–117
Makhzani A, Shlens J, Jaitly N, Goodfellow I, Frey B (2015) Adversarial autoencoders. arXiv preprint arXiv:1511.05644
Wiener N (1948) Cybernetics, or control and communication in the animal and the machine. MIT Press, Cambridge
Kullback S, Leibler RA (1951) On information and sufficiency. Ann Math Stat 22(1):79–86
Vandewalle J, Staar J, Moor BD, Lauwers J (1984) An adaptive singular value decomposition algorithm and its application to adaptive realization, vol 63. Springer, Berlin
Qi Y, Wang Y, Zheng X, Wu Z (2014) Robust feature learning by stacked autoencoder with maximum correntropy criterion. In: 2014 IEEE international conference on acoustics, speech and signal processing (ICASSP). IEEE, pp 6716–6720
Cayton L (2005) Algorithms for manifold learning. Univ California San Diego Tech Rep 12(1–17):1
Olshausen BA, Field DJ (2004) Sparse coding of sensory inputs. Curr Opin Neurobiol 14(4):481–487
Povey D, Ghoshal A, Boulianne G, Burget L, Glembek O, Goel N, Hannemann M, Motlicek P, Qian Y, Schwarz P, et al (2011) The Kaldi speech recognition toolkit. In: IEEE 2011 workshop on automatic speech recognition and understanding. IEEE Signal Processing Society
Bengio Y, Lamblin P, Popovici D, Larochelle H (2007) Greedy layer-wise training of deep networks. In: Advances in neural information processing systems, pp 153–160
Srivastava N, Mansimov E, Salakhudinov R (2015) Unsupervised learning of video representations using LSTMs. In: International conference on machine learning. PMLR, pp 843–852
Levy O, Goldberg Y (2014) Neural word embedding as implicit matrix factorization. Adv Neural Inf Process Syst 27:2177–2185
Golub G, Reinsch C (1971) Singular value decomposition and least squares solutions. In: Linear algebra. Springer, pp 134–151
Xie J, Xu L, Chen E (2012) Image denoising and inpainting with deep neural networks. In: Advances in neural information processing systems, pp 341–349
He R, Hu B-G, Zheng W-S, Kong X-W (2011) Robust principal component analysis based on maximum correntropy criterion. IEEE Trans Image Process 20(6):1485–1494
Hotelling H (1933) Analysis of a complex of statistical variables into principal components. J Educ Psychol 24(6/7):417–441
Stewart G (1973) Introduction to matrix computation. Academic Press, New York
Kingma DP, Welling M (2019) An introduction to variational autoencoders. arXiv preprint arXiv:1906.02691
Rifai S, Mesnil G, Vincent P, Muller X, Bengio Y, Dauphin Y, Glorot X (2011) Higher order contractive auto-encoder. In: Joint European conference on machine learning and knowledge discovery in databases. Springer, pp 645–660
Xiong P, Wang H, Liu M, Zhou S, Hou Z, Liu X (2016) ECG signal enhancement based on improved denoising auto-encoder. Eng Appl Artif Intell 52:194–202
Hansen PC, O'Leary DP (1993) The use of the L-curve in the regularization of discrete ill-posed problems. SIAM J Sci Comput 14(6):1487–1503
Baldi P, Hornik K (1989) Neural networks and principal component analysis: learning from examples without local minima. Neural Netw 2(1):53–58
Bunch J, Nielsen C (1978) Updating the singular value decomposition. Numer Math 31:111–129
Nguyen H, Tran KP, Thomassey S, Hamad M (2021) Forecasting and anomaly detection approaches using LSTM and LSTM autoencoder techniques with the applications in supply chain management. Int J Inf Manage 57
Cristianini N, Shawe-Taylor J (2000) An introduction to support vector machines and other kernel-based learning methods. Cambridge University Press, Cambridge
Hinton GE, Salakhutdinov RR (2006) Reducing the dimensionality of data with neural networks. Science 313(5786):504–507
Masci J, Meier U, Cireşan D, Schmidhuber J (2011) Stacked convolutional auto-encoders for hierarchical feature extraction. In: International conference on artificial neural networks. Springer, pp 52–59
Wolf T, Debut L, Sanh V, Chaumond J, Delangue C, Moi A, Cistac P, Rault T, Louf R, Funtowicz M, et al (2020) Transformers: state-of-the-art natural language processing. In: Proceedings of the 2020 conference on empirical methods in natural language processing: system demonstrations, pp 38–45
Ashby WR (1961) An introduction to cybernetics. Chapman & Hall, New York
Baldi PF, Hornik K (1995) Learning in linear neural networks: a survey. IEEE Trans Neural Netw 6(4):837–858
Vapnik V (1999) The nature of statistical learning theory. Springer Science & Business Media, Berlin
Principi E, Rossetti D, Squartini S, Piazza F (2019) Unsupervised electric motor fault detection by using deep autoencoders. IEEE/CAA J Automatica Sinica 6(2):441–451
Brea J, Gerstner W (2016) Does computational neuroscience need new synaptic learning paradigms? Curr Opin Behav Sci 11:61–66
Lu G-F, Zou J, Wang Y, Wang Z (2016) L1-norm-based principal component analysis with adaptive regularization. Pattern Recogn 60:901–907
Bourlard H, Kamp Y, Wellekens C (1985) Speaker dependent connected speech recognition via phonetic Markov models. In: ICASSP'85, IEEE international conference on acoustics, speech, and signal processing, vol 10. IEEE, pp 1213–1216
Ben-Hur A, Horn D, Siegelmann HT, Vapnik V (2002) Support vector clustering. J Mach Learn Res 2:125–137
Li J, Luong M-T, Jurafsky D (2015) A hierarchical neural autoencoder for paragraphs and documents. arXiv preprint arXiv:1506.01057
Vincent P, Larochelle H, Bengio Y, Manzagol P (2008) Extracting and composing robust features with denoising autoencoders. In: Proceedings of the 25th international conference on machine learning, pp 1096–1103
Sutskever I, Vinyals O, Le QV (2014) Sequence to sequence learning with neural networks. In: Advances in neural information processing systems, pp 3104–3112
Charte D, Charte F, del Jesus MJ, Herrera F (2020) An analysis on the use of autoencoders for representation learning: fundamentals, learning task case studies, explainability and challenges. Neurocomputing 404:93–107
Zou W, Socher R, Cer D, Manning C (2013) Bilingual word embeddings for phrase-based machine translation. In: Proceedings of the 2013 conference on empirical methods in natural language processing, pp 1393–1398
Dosovitskiy A, Brox T (2016) Generating images with perceptual similarity metrics based on deep networks. Adv Neural Inf Process Syst 29:658–666
Chen S, Donoho D, Saunders M (2001) Atomic decomposition by basis pursuit. SIAM J Sci Comput 43(1):129–159
Goodfellow I, Bengio Y, Courville A (2016) Deep learning. MIT Press, Cambridge
Mikolov T, Chen K, Corrado G, Dean J (2013) Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781
Golub G, Van Loan C (1983) Matrix computation. Oxford Academic Press, Oxford
Bourlard H, Kamp Y (1988) Auto-association by multilayer perceptrons and singular value decomposition. Biol Cybern 59(4):291–294
Guo Z, Yue H, Wang H (2004) A modified PCA based on the minimum error entropy. In: Proceedings of the 2004 American control conference, vol 4. IEEE, pp 3800–3801
Hornik K, Stinchcombe M, White H (1989) Multilayer feedforward networks are universal approximators. Neural Netw 2(5):359–366 |
| SubjectTerms | Artificial neural networks; Bioinformatics; Biomedical and Life Sciences; Biomedicine; Coders; Complex Systems; Computer Appl. in Life Sciences; Decomposition; Eigenvalues; Empirical analysis; Feature extraction; Linear algebra; Machine learning; Manifolds (mathematics); Multilayer perceptrons; Neural networks; Neurobiology; Neurosciences; Principal components analysis; Prospects; Singular value decomposition; Statistical analysis |