Distributed quasi-Newton derivative-free optimization method for optimization problems with multiple local optima
Saved in:
| Published in: | Computational Geosciences, Vol. 26, No. 4, pp. 847-863 |
|---|---|
| Main authors: | , , , , |
| Format: | Journal Article |
| Language: | English |
| Published: | Cham: Springer International Publishing, 01.08.2022 (Springer Nature B.V.) |
| Subjects: | |
| ISSN: | 1420-0597, 1573-1499 |
| Online access: | Full text |
| Summary: | The distributed Gauss-Newton (DGN) optimization method performs efficiently and robustly on history-matching problems with multiple best matches. However, it is not applicable to generic optimization problems such as life-cycle production optimization or well location optimization. This paper introduces a generalized form of the objective function, F(x, y(x)) = f(x), with both explicit variables x and implicit variables (or simulated responses) y(x). The split into explicit and implicit variables is chosen such that the partial derivatives of F(x, y) with respect to both x and y can be computed analytically. An ensemble of quasi-Newton optimization threads is distributed among multiple high-performance-computing (HPC) cluster nodes. The simulation results generated by one optimization thread are shared with the others by updating a common set of training data points, which records the simulated responses of all simulation jobs. The sensitivity matrix at the current best solution of each optimization thread is approximated by linear interpolation. The gradient of the objective function is then computed analytically from its partial derivatives with respect to x and y and the estimated sensitivities of y with respect to x. The Hessian is updated using the quasi-Newton formulation. A new search point for each distributed optimization thread is generated by solving a quasi-Newton trust-region subproblem (TRS) for the next iteration (an illustrative sketch of one such iteration follows this record). The proposed distributed quasi-Newton (DQN) method is first validated on a synthetic history-matching problem, where its performance is found to be comparable with the DGN optimizer. The DQN method is then tested on a variety of optimization problems; for all of them, it finds multiple optima of the objective function within a reasonably small number of iterations. |
|---|---|
| DOI: | 10.1007/s10596-021-10101-x |
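
The abstract outlines one iteration of each DQN thread: estimate the sensitivity matrix of the simulated responses y with respect to the explicit variables x by linear interpolation of the shared training data, assemble the gradient of f by the chain rule from the analytic partials of F, update the Hessian with a quasi-Newton formula, and solve a trust-region subproblem for the next search point. The following is a minimal sketch of those steps, assuming NumPy; the function names are hypothetical, the BFGS update is one common quasi-Newton choice, and the Levenberg-style shift merely stands in for a full TRS solver, so none of this should be read as the authors' exact implementation.

```python
# Illustrative sketch of one DQN-style iteration; names and formulas are
# assumptions, not the paper's implementation.
import numpy as np

def estimate_sensitivity(X, Y, x_best, y_best):
    """Approximate S = dy/dx at the current best solution by linear
    interpolation of the shared training points (X[i], Y[i]):
    least-squares fit of dY ~ dX @ S.T."""
    dX = np.asarray(X) - x_best          # (m, n) offsets in explicit variables
    dY = np.asarray(Y) - y_best          # (m, p) offsets in simulated responses
    S_T, *_ = np.linalg.lstsq(dX, dY, rcond=None)
    return S_T.T                         # (p, n) sensitivity of y w.r.t. x

def objective_gradient(dF_dx, dF_dy, S):
    """Chain rule for F(x, y(x)) = f(x): grad f = dF/dx + S.T @ dF/dy."""
    return dF_dx + S.T @ dF_dy

def bfgs_update(H, step, g_new, g_old):
    """One BFGS Hessian update (a common quasi-Newton formulation; the
    paper's exact update may differ)."""
    y = g_new - g_old
    sy = step @ y
    if sy > 1e-12:                       # curvature condition
        Hs = H @ step
        H = H + np.outer(y, y) / sy - np.outer(Hs, Hs) / (step @ Hs)
    return H

def trust_region_step(g, H, radius):
    """Approximately solve the quasi-Newton trust-region subproblem
        min_s  g.T s + 0.5 s.T H s   s.t.  ||s|| <= radius
    with a simple Levenberg-style diagonal shift (a stand-in for a
    full TRS solver)."""
    lam, n = 0.0, len(g)
    s = -g
    for _ in range(60):
        s = np.linalg.solve(H + lam * np.eye(n), -g)
        if np.linalg.norm(s) <= radius:
            return s
        lam = 2.0 * lam + 1e-4           # grow the shift until the step fits
    return s * (radius / np.linalg.norm(s))
```

In a distributed run, each HPC thread would apply these steps to its own current best point after every finished simulation appends its (x, y(x)) pair to the shared training set, which is how one thread's results accelerate the others.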