Physical data embedding for memory-efficient AI
| Title: | Physical data embedding for memory-efficient AI |
|---|---|
| Authors: | Callen MacPhee, Yiming Zhou, Bahram Jalali |
| Source: | Machine Learning: Science and Technology, Vol. 6, Iss. 4, p. 045018 (2025) |
| Publisher Information: | IOP Publishing |
| Publication Year: | 2025 |
| Collection: | Directory of Open Access Journals: DOAJ Articles |
| Subject Terms: | physics-AI symbiosis, interpretable AI, physics-inspired algorithms, physics-based neural networks, memory-efficient AI, Computer engineering. Computer hardware, TK7885-7895, Electronic computers. Computer science, QA75.5-76.95 |
| Description: | Deep neural networks have achieved exceptional performance across various fields by learning complex, nonlinear mappings from large-scale datasets. However, they face challenges such as high memory requirements and limited interpretability. This paper introduces an approach where master equations of physics are converted into multilayered networks that are trained via backpropagation. The resulting general-purpose model effectively encodes data in the properties of the underlying physical system. In contrast to existing methods wherein a trained neural network is used as a computationally efficient alternative for solving physical equations, our approach directly treats physics equations as trainable models. Rather than approximating physics with a neural network or augmenting a network with physics-inspired constraints, this framework makes the equation itself the architecture. We demonstrate this physical embedding concept with the nonlinear Schrödinger equation, which acts as a trainable architecture for learning complex patterns, including nonlinear mappings and memory effects, from data. The network embeds the data representation in orders of magnitude fewer parameters than conventional neural networks when tested on time series data. Notably, the trained ‘Nonlinear Schrödinger Network’ is interpretable, with all parameters having physical meanings. Crucially, this approach also provides a blueprint for implementing such AI computations in physical analog systems, offering a direct path toward low-latency and energy-efficient hardware realizations. The proposed method is also extended to the Gross-Pitaevskii equation, demonstrating the broad applicability of the framework to other master equations of physics. Among our results, an ablation study quantifies the relative importance of physical terms such as dispersion, nonlinearity, and potential energy for classification accuracy. We also outline the limitations and benefits of this approach as they relate to universality and generalizability. Overall, this work aims ... |
| Document Type: | article in journal/newspaper |
| Language: | English |
| Relation: | https://doi.org/10.1088/2632-2153/ae0f37; https://doaj.org/toc/2632-2153; https://doaj.org/article/b97d56612e9342afadd949cf144179ee |
| DOI: | 10.1088/2632-2153/ae0f37 |
| Availability: | https://doi.org/10.1088/2632-2153/ae0f37 https://doaj.org/article/b97d56612e9342afadd949cf144179ee |
| Accession Number: | edsbas.238D9702 |
| Database: | BASE |
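The record describes the two governing equations only in prose. For orientation, their standard textbook forms are reproduced below; the paper's own normalization, variable names, and included terms (e.g. loss or higher-order dispersion) may differ.

```latex
% Nonlinear Schrödinger equation (fiber-optics form): field envelope A(z, t),
% group-velocity dispersion \beta_2, Kerr nonlinearity \gamma.
\[
\frac{\partial A}{\partial z}
  = -\,\frac{i\beta_2}{2}\,\frac{\partial^2 A}{\partial t^2}
    + i\gamma\,\lvert A\rvert^2 A
\]
% Gross-Pitaevskii equation: wavefunction \psi(\mathbf{r}, t), external
% potential V(\mathbf{r}), interaction strength g.
\[
i\hbar\,\frac{\partial \psi}{\partial t}
  = \left( -\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{r}) + g\,\lvert\psi\rvert^2 \right)\psi
\]
```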
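Neither the record nor the abstract specifies an implementation. The PyTorch sketch below is one plausible reading of "the equation itself is the architecture": each layer is a split-step Fourier propagation step of the nonlinear Schrödinger equation whose physical coefficients β₂ (dispersion) and γ (nonlinearity) are trainable. The class names (`NLSELayer`, `NLSENetwork`), the step size `dz`, and the linear read-out are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class NLSELayer(nn.Module):
    """One split-step Fourier propagation step of the nonlinear Schrödinger
    equation, with the dispersion (beta2) and Kerr nonlinearity (gamma)
    coefficients exposed as trainable parameters."""
    def __init__(self, n_samples: int, dz: float = 0.1):
        super().__init__()
        self.beta2 = nn.Parameter(torch.tensor(1.0))  # group-velocity dispersion
        self.gamma = nn.Parameter(torch.tensor(1.0))  # Kerr nonlinearity
        self.dz = dz
        # Angular-frequency grid for the spectral (linear) part of the step.
        omega = 2 * torch.pi * torch.fft.fftfreq(n_samples)
        self.register_buffer("omega2", omega ** 2)

    def forward(self, a: torch.Tensor) -> torch.Tensor:
        # a: complex field envelope, shape (batch, n_samples).
        # Linear (dispersive) step, applied in the Fourier domain.
        a_hat = torch.fft.fft(a)
        a = torch.fft.ifft(a_hat * torch.exp(0.5j * self.beta2 * self.omega2 * self.dz))
        # Nonlinear (Kerr) step, applied pointwise in the time domain.
        return a * torch.exp(1j * self.gamma * a.abs() ** 2 * self.dz)

class NLSENetwork(nn.Module):
    """A stack of propagation steps followed by a linear read-out; each
    propagation layer contributes only two physical parameters."""
    def __init__(self, n_samples: int, n_layers: int = 4, n_classes: int = 2):
        super().__init__()
        self.layers = nn.ModuleList([NLSELayer(n_samples) for _ in range(n_layers)])
        self.readout = nn.Linear(n_samples, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = x.to(torch.cfloat)        # treat the input time series as a complex field
        for layer in self.layers:
            a = layer(a)
        return self.readout(a.abs())  # field intensity -> class logits
```

Under these assumptions, each propagation layer carries only two scalar coefficients, so the trainable parameter count is dominated by the read-out, which is consistent with the abstract's claim of orders-of-magnitude fewer parameters than a conventional network while keeping every propagation parameter physically interpretable.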