An efficient first-order conditional gradient algorithm in data-driven sparse identification of nonlinear dynamics to solve sparse recovery problems under noise

Detailed bibliography
Published in: Journal of Computational and Applied Mathematics, Vol. 470, p. 116675
Main authors: Carderera, Alejandro; Pokutta, Sebastian; Schütte, Christof; Weiser, Martin
Format: Journal Article
Language: English
Publication details: Elsevier B.V., 15.12.2025
ISSN:0377-0427
Description
Summary: Governing equations are essential to the study of nonlinear dynamics, often enabling the prediction of previously unseen behaviors as well as their inclusion in control strategies. The discovery of governing equations from data thus has the potential to transform data-rich fields where well-established dynamical models remain unknown. This work contributes to the recent trend in data-driven sparse identification of nonlinear dynamics of finding the best sparse fit to observational data in a large library of potential nonlinear models. We propose an efficient first-order Conditional Gradient algorithm for solving the underlying optimization problem. In comparison to the most prominent alternative framework, the new framework shows significantly improved performance on several essential issues like sparsity-induction, structure-preservation, noise robustness, and sample efficiency. We demonstrate these advantages on several dynamics from the fields of synchronization, particle dynamics, and enzyme chemistry.
• Identification of nonlinear dynamics from noisy data via sparse regression.
• Enhanced sparsity improves robustness to noise, sample efficiency, and inference.
• Conditional-gradient-based optimization promotes sparsity beyond l1 regularization.
• Incorporating structure as linear (in)equality constraints is straightforward.
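The conditional gradient (Frank-Wolfe) approach summarized above can be illustrated in miniature: minimize a least-squares fit of library terms to observed derivatives over an l1-ball, whose vertices are scaled unit vectors, so every iterate is a convex combination of few vertices and stays sparse. The sketch below is a generic textbook Frank-Wolfe loop, not the paper's actual algorithm; the library matrix `Theta`, derivative data `dXdt`, and the radius are illustrative placeholders.

```python
import numpy as np

def frank_wolfe_l1(Theta, dXdt, radius=1.0, n_iters=500):
    """Minimize 0.5 * ||Theta @ xi - dXdt||^2 over the l1-ball of the
    given radius via Frank-Wolfe. The linear minimization oracle over the
    l1-ball picks a single signed, scaled unit vector, so iterates are
    sparse convex combinations of those vertices."""
    n = Theta.shape[1]
    xi = np.zeros(n)
    for t in range(n_iters):
        grad = Theta.T @ (Theta @ xi - dXdt)   # gradient of the least-squares loss
        i = np.argmax(np.abs(grad))            # l1-ball vertex minimizing <grad, v>
        v = np.zeros(n)
        v[i] = -radius * np.sign(grad[i])
        gamma = 2.0 / (t + 2.0)                # standard open-loop step size
        xi = (1 - gamma) * xi + gamma * v      # move toward the selected vertex
    return xi

# Toy sparse recovery: a random "library" and a coefficient vector
# with two nonzero entries (purely synthetic data for illustration).
rng = np.random.default_rng(0)
Theta = rng.standard_normal((100, 10))
xi_true = np.zeros(10)
xi_true[2], xi_true[7] = 0.8, -0.5
dXdt = Theta @ xi_true
xi_hat = frank_wolfe_l1(Theta, dXdt, radius=1.3)
```

By construction, `xi_hat` lies inside the l1-ball and is supported on only the vertices the oracle has selected, which is the sparsity-inducing mechanism the summary contrasts with plain l1 regularization.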
DOI:10.1016/j.cam.2025.116675