A comparison of neural network architectures for data-driven reduced-order modeling
Saved in:
| Published in: | Computer Methods in Applied Mechanics and Engineering, Vol. 393, p. 114764 |
|---|---|
| Main authors: | Gruber, Anthony; Gunzburger, Max; Ju, Lili; Wang, Zhu |
| Medium: | Journal Article |
| Language: | English |
| Publication details: | Amsterdam: Elsevier B.V., 01.04.2022 |
| ISSN: | 0045-7825, 1879-2138 |
| Online access: | Get full text |
| Abstract | The popularity of deep convolutional autoencoders (CAEs) has engendered new and effective reduced-order models (ROMs) for the simulation of large-scale dynamical systems. Despite this, it is still unknown whether deep CAEs provide superior performance over established linear techniques or other network-based methods in all modeling scenarios. To elucidate this, the effect of autoencoder architecture on its associated ROM is studied through the comparison of deep CAEs against two alternatives: a simple fully connected autoencoder, and a novel graph convolutional autoencoder. Through benchmark experiments, it is shown that the superior autoencoder architecture for a given ROM application is highly dependent on the size of the latent space and the structure of the snapshot data, with the proposed architecture demonstrating benefits on data with irregular connectivity when the latent space is sufficiently large. |
|---|---|
| Highlights | • Autoencoder neural networks facilitate effective nonlinear methods for reduced-order modeling (ROM). • Many popular ROMs employ a standard convolutional autoencoder, which is only suitable for regular data. • A new ROM architecture is proposed which uses a graph convolutional autoencoder suitable for irregular data. • The proposed architecture is compared to ROMs based on POD, standard convolutional, and fully connected architectures. • Experiments show that the notion of best architecture is task-dependent, with the proposed ROM producing superior results in several cases. |
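The highlights compare the autoencoder ROMs against proper orthogonal decomposition (POD), the established linear baseline. As an illustrative sketch only (not the paper's implementation; the function names and shapes below are ours), POD builds a linear reduced basis from the SVD of a snapshot matrix and uses it as a linear encoder/decoder pair:

```python
import numpy as np

def pod_basis(snapshots, r):
    """Return the r leading left singular vectors of the snapshot matrix."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

def pod_project(snapshots, basis):
    """Encode snapshots into the r-dimensional latent space and decode back."""
    latent = basis.T @ snapshots      # linear "encoder": projection onto the basis
    reconstruction = basis @ latent   # linear "decoder": lift back to full space
    return latent, reconstruction

# Usage: 100-dimensional states, 20 snapshots, latent dimension r = 5.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
V = pod_basis(X, 5)
z, Xr = pod_project(X, V)
```

When the full rank is retained the linear encode/decode round trip is exact; accuracy at small latent dimensions is where the nonlinear autoencoders studied in the paper can improve on this baseline.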
| ArticleNumber | 114764 |
| Author | Wang, Zhu Ju, Lili Gruber, Anthony Gunzburger, Max |
| Author details | 1. Gruber, Anthony (agruber@fsu.edu; ORCID 0000-0001-7107-5307), Department of Scientific Computing, Florida State University, 400 Dirac Science Library, Tallahassee, FL 32306, USA. 2. Gunzburger, Max (mgunzburger@fsu.edu), Department of Scientific Computing, Florida State University, 400 Dirac Science Library, Tallahassee, FL 32306, USA. 3. Ju, Lili (ju@math.sc.edu; ORCID 0000-0002-6520-582X), Department of Mathematics, University of South Carolina, 1523 Greene Street, Columbia, SC 29208, USA. 4. Wang, Zhu (wangzhu@math.sc.edu), Department of Mathematics, University of South Carolina, 1523 Greene Street, Columbia, SC 29208, USA. |
| BackLink | https://www.osti.gov/servlets/purl/1865639 (view this record in OSTI.gov) |
| ContentType | Journal Article |
| Copyright | 2022 Elsevier B.V. Copyright Elsevier BV Apr 1, 2022 |
| CorporateAuthor | Univ. of South Carolina, Columbia, SC (United States) |
| DOI | 10.1016/j.cma.2022.114764 |
| Discipline | Applied Sciences Engineering |
| EISSN | 1879-2138 |
| ISICitedReferencesCount | 52 |
| ISSN | 0045-7825 |
| IsDoiOpenAccess | true |
| IsOpenAccess | true |
| IsPeerReviewed | true |
| IsScholarly | true |
| Keywords | Reduced-order modeling; Parametric PDEs; Graph convolution; Convolutional autoencoder; Nonlinear dimensionality reduction; MSC: 65M22, 65M60 |
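The "Graph convolution" keyword refers to layers that propagate node features over irregular connectivity, the kind of data the proposed autoencoder targets. A minimal sketch of one such layer in the Kipf-Welling style (a hypothetical illustration with our own names; not the architecture proposed in the paper):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: symmetrically normalized propagation, then weights.

    A: (n, n) adjacency matrix, H: (n, f_in) node features, W: (f_in, f_out) weights.
    """
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # degrees of the self-looped graph
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^{-1/2} for symmetric normalization
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W

# Usage: a 3-node path graph with 2 input features and 2 output features per node.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.arange(6.0).reshape(3, 2)
W = np.eye(2)
out = gcn_layer(A, H, W)
```

Because the propagation matrix D^{-1/2}(A + I)D^{-1/2} is symmetric, each output feature is a degree-weighted average over a node's neighborhood, which is what lets the layer operate on meshes with no regular grid structure.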
| Language | English |
| Notes | Funding: USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR), Scientific Discovery through Advanced Computing (SciDAC); grants SC0020270 and SC0020418 |
| ORCID | 0000-0001-7107-5307 (Gruber); 0000-0002-6520-582X (Ju) |
| OpenAccessLink | https://www.osti.gov/servlets/purl/1865639 |
| PublicationDate | 2022-04-01 |
| PublicationPlace | Amsterdam |
| PublicationTitle | Computer methods in applied mechanics and engineering |
| PublicationYear | 2022 |
| Publisher | Elsevier B.V Elsevier BV Elsevier |
| StartPage | 114764 |
| SubjectTerms | Computer architecture; Convolutional autoencoder; Engineering; Graph convolution; Modelling; Neural networks; Nonlinear dimensionality reduction; Parametric PDEs; Reduced order models; Reduced-order modeling; System effectiveness |
| Title | A comparison of neural network architectures for data-driven reduced-order modeling |
| URI | https://dx.doi.org/10.1016/j.cma.2022.114764 https://www.proquest.com/docview/2662022916 https://www.osti.gov/servlets/purl/1865639 |
| Volume | 393 |