An adjoint-free algorithm for conditional nonlinear optimal perturbations (CNOPs) via sampling
| Published in: | Nonlinear Processes in Geophysics, Vol. 30, No. 3, pp. 263-276 |
|---|---|
| Main Authors: | , |
| Format: | Journal Article |
| Language: | English |
| Published: | Göttingen: Copernicus GmbH / Copernicus Publications, 6 July 2023 |
| ISSN: | 1023-5809, 1607-7946 |
| Summary: | In this paper, we propose a sampling algorithm based on state-of-the-art statistical machine learning techniques to obtain conditional nonlinear optimal perturbations (CNOPs), in contrast to traditional (deterministic) optimization methods. The traditional approach is often impractical because it requires numerically computing the gradient (first-order information), which is computationally expensive since it demands a large number of runs of the numerical model. The sampling approach instead reduces the gradient to objective function values (zeroth-order information) and avoids the adjoint technique, which is unavailable for many atmosphere and ocean models and requires large amounts of storage. We give an intuitive analysis of the sampling algorithm based on the law of large numbers and further present a Chernoff-type concentration inequality that rigorously characterizes how well the sample average probabilistically approximates the exact gradient. Experiments are performed to obtain the CNOPs for two numerical models, the Burgers equation with small viscosity and the Lorenz-96 model, and we report the resulting spatial patterns, objective values, computation times, and nonlinear error growth. Across the three approaches compared, all the characteristics quantifying the CNOPs are nearly consistent, while the computation time of the sampling approach with fewer samples is much shorter. In other words, the new sampling algorithm greatly shortens the computation time at the cost of only a small loss of accuracy. (A minimal sketch of such a zeroth-order gradient estimate follows this record.) |
|---|---|
| DOI: | 10.5194/npg-30-263-2023 |
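The sampling approach described in the summary replaces adjoint-based gradients with estimates built purely from objective function evaluations. The sketch below illustrates such a zeroth-order gradient estimate combined with a projected ascent over the constraint ball; the function names, the smoothing scale `sigma`, the sample count, the step size, and the toy quadratic objective are illustrative assumptions and do not reproduce the exact estimator or optimizer used in the paper.

```python
import numpy as np

def sampled_gradient(J, x, sigma=1e-2, n_samples=50, rng=None):
    """Zeroth-order estimate of the gradient of J at x.

    Averages (J(x + sigma*eps) - J(x)) / sigma * eps over Gaussian
    directions eps, so only objective evaluations (model runs) are
    needed -- no adjoint model and no stored trajectories.
    """
    rng = np.random.default_rng(rng)
    grad = np.zeros_like(x, dtype=float)
    j0 = J(x)  # baseline value; differencing reduces the variance
    for _ in range(n_samples):
        eps = rng.standard_normal(x.shape)
        grad += (J(x + sigma * eps) - j0) / sigma * eps
    return grad / n_samples

def cnop_by_sampling(J, dim, beta, n_iters=200, step=0.05, seed=0, **kw):
    """Projected gradient ascent on the ball ||delta|| <= beta using the
    sampled gradient; returns the best perturbation found."""
    rng = np.random.default_rng(seed)
    delta = 0.1 * beta * rng.standard_normal(dim)  # small random start
    best, best_val = delta.copy(), J(delta)
    for _ in range(n_iters):
        delta = delta + step * sampled_gradient(J, delta, rng=rng, **kw)
        norm = np.linalg.norm(delta)
        if norm > beta:              # project back onto the constraint set
            delta *= beta / norm
        val = J(delta)
        if val > best_val:
            best, best_val = delta.copy(), val
    return best, best_val

# Toy usage: a quadratic stand-in objective. A real application would run
# a numerical model (e.g. Burgers or Lorenz-96) inside J.
if __name__ == "__main__":
    A = np.diag(np.linspace(0.5, 2.0, 8))
    J = lambda d: float(d @ A @ d)
    delta, val = cnop_by_sampling(J, dim=8, beta=1.0)
    print(f"objective = {val:.3f}, ||delta|| = {np.linalg.norm(delta):.3f}")
```

In a real application, each call to `J` would integrate the numerical model from the perturbed initial condition and return the nonlinear error growth, so the cost per ascent iteration is roughly `n_samples + 1` forward model runs rather than an adjoint integration.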