On the eigenvector bias of Fourier feature networks: From regression to solving multi-scale PDEs with physics-informed neural networks

Bibliographic Details
Published in: Computer Methods in Applied Mechanics and Engineering, Vol. 384, No. C, p. 113938
Main Authors: Wang, Sifan; Wang, Hanwen; Perdikaris, Paris
Format: Journal Article
Language: English
Published: Amsterdam: Elsevier B.V., 01.10.2021
ISSN: 0045-7825, 1879-2138
Description
Summary: Physics-informed neural networks (PINNs) are demonstrating remarkable promise in integrating physical models with gappy and noisy observational data, but they still struggle in cases where the target functions to be approximated exhibit high-frequency or multi-scale features. In this work we investigate this limitation through the lens of Neural Tangent Kernel (NTK) theory and elucidate how PINNs are biased towards learning functions along the dominant eigen-directions of their limiting NTK. Using this observation, we construct novel architectures that employ spatio-temporal and multi-scale random Fourier features, and justify how such coordinate embedding layers can lead to robust and accurate PINN models. Numerical examples are presented for several challenging cases where conventional PINN models fail, including wave propagation and reaction–diffusion dynamics, illustrating how the proposed methods can be used to effectively tackle both forward and inverse problems involving partial differential equations with multi-scale behavior. All code and data accompanying this manuscript will be made publicly available at https://github.com/PredictiveIntelligenceLab/MultiscalePINNs.
Highlights:
•We argue that spectral bias in deep neural networks in fact corresponds to “NTK eigenvector bias”.
•We show that Fourier feature mappings can modulate the frequency of the NTK eigenvectors.
•By analyzing the NTK eigenspace, we engineer new effective architectures for multi-scale problems.
•We put forth a collection of challenging benchmarks for PINNs.
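The summary above refers to multi-scale random Fourier features used as a coordinate embedding layer. The snippet below is a minimal NumPy sketch of such an embedding, assuming the standard mapping γ(x) = [cos(2πBx), sin(2πBx)] with the entries of B drawn from a Gaussian whose standard deviation sets the scale; the dimensions, σ values, and the fourier_features helper are illustrative assumptions, not the authors' released implementation (see the repository linked in the summary).

```python
import numpy as np

def fourier_features(x, B):
    """Random Fourier feature embedding gamma(x) = [cos(2*pi*x B^T), sin(2*pi*x B^T)]."""
    proj = 2.0 * np.pi * x @ B.T                                   # (N, m)
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)   # (N, 2m)

rng = np.random.default_rng(0)
d, m = 2, 128                         # (x, t) input dimension, features per scale
sigmas = [1.0, 10.0, 50.0]            # illustrative bandwidths, one per scale
Bs = [s * rng.standard_normal((m, d)) for s in sigmas]

x = rng.uniform(size=(5, d))          # dummy spatio-temporal coordinates
embeddings = [fourier_features(x, B) for B in Bs]
print([e.shape for e in embeddings])  # [(5, 256), (5, 256), (5, 256)]
```

In a multi-scale PINN architecture of the kind the summary describes, each per-scale embedding would serve as the input layer of the network, with the resulting representations merged before the output; the per-scale σ values control which frequency bands the network can resolve efficiently.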
Funding: USDOE Advanced Research Projects Agency - Energy (ARPA-E)
DOI: 10.1016/j.cma.2021.113938