A machine-learning-accelerated distributed LBFGS method for field development optimization: algorithm, validation, and applications

Published in: Computational Geosciences, Vol. 27, No. 3, pp. 425–450
Authors: Alpak, Faruk; Gao, Guohua; Florez, Horacio; Shi, Steve; Vink, Jeroen; Blom, Carl; Saaf, Fredrik; Wells, Terence
Format: Journal Article
Language: English
Publisher: Cham: Springer International Publishing (Springer Nature B.V.), 01.06.2023
ISSN: 1420-0597, 1573-1499
Description
Abstract: We have developed a support vector regression (SVR) accelerated variant of the distributed derivative-free optimization (DFO) method that uses the limited-memory BFGS (LBFGS) Hessian updating formulation for subsurface field-development optimization problems. The SVR-enhanced distributed LBFGS (D-LBFGS) optimizer is designed to effectively locate multiple local optima of highly nonlinear optimization problems subject to numerical noise. It operates on both single- and multiple-objective field-development optimization problems. The basic D-LBFGS DFO optimizer runs multiple optimization threads in parallel and uses linear interpolation to approximate the sensitivity matrix of simulated responses with respect to the optimized model parameters. However, this approach is less accurate and slows down convergence. In this paper, we implement an effective variant of the SVR method, namely ε-SVR, and integrate it into the D-LBFGS engine in synchronous mode within the framework of a versatile optimization library inside a next-generation reservoir simulation platform. Because ε-SVR has a closed-form predictive formulation, we analytically calculate the approximated objective function and its gradients with respect to the input model variables subject to optimization. We investigate two different methods for proposing a new search point for each optimization thread in each iteration through seamless integration of ε-SVR with the D-LBFGS optimizer. The first method estimates the sensitivity matrix and the gradients directly using the analytical ε-SVR surrogate and then solves an LBFGS trust-region subproblem (TRS). The second method applies a trust-region search LBFGS method to optimize the approximated objective function using the analytical ε-SVR surrogate within a box-shaped trust region. We first show that ε-SVR provides accurate estimates of gradient vectors on a set of nonlinear analytical test problems.
We then report the results of numerical experiments conducted using the newly proposed SVR-enhanced D-LBFGS algorithms on both synthetic and realistic field-development optimization problems. We demonstrate that these algorithms operate effectively on realistic nonlinear optimization problems subject to numerical noise. We show that both SVR-enhanced D-LBFGS variants converge faster and thereby provide a significant acceleration over the basic implementation of D-LBFGS with linear interpolation.
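The abstract's key enabling observation is that an ε-SVR surrogate has a closed-form prediction, so its gradient with respect to the input variables can be computed analytically rather than by interpolation. The following is a minimal illustrative sketch of that idea, not the authors' implementation: the RBF kernel choice, the function names, and all parameter values here are assumptions for demonstration.

```python
import numpy as np

def rbf_kernel(x, xi, gamma):
    """Gaussian (RBF) kernel k(x, xi) = exp(-gamma * ||x - xi||^2)."""
    d = x - xi
    return np.exp(-gamma * np.dot(d, d))

def svr_predict(x, support_vectors, coeffs, b, gamma):
    """Closed-form epsilon-SVR prediction: f(x) = sum_i a_i * k(x_i, x) + b."""
    return sum(a * rbf_kernel(x, xi, gamma)
               for a, xi in zip(coeffs, support_vectors)) + b

def svr_gradient(x, support_vectors, coeffs, gamma):
    """Analytical gradient of the RBF epsilon-SVR surrogate:
    grad f(x) = sum_i a_i * (-2 * gamma) * (x - x_i) * k(x_i, x).
    (The bias b is constant and does not contribute.)"""
    g = np.zeros_like(x)
    for a, xi in zip(coeffs, support_vectors):
        g += a * (-2.0 * gamma) * (x - xi) * rbf_kernel(x, xi, gamma)
    return g

if __name__ == "__main__":
    # Check the analytical gradient against central finite differences
    # on random (hypothetical) support vectors and dual coefficients.
    rng = np.random.default_rng(0)
    sv = rng.normal(size=(5, 3))
    coeffs = rng.normal(size=5)
    b, gamma = 0.3, 0.5
    x0 = rng.normal(size=3)
    g = svr_gradient(x0, sv, coeffs, gamma)
    h = 1e-6
    for j in range(3):
        e = np.zeros(3)
        e[j] = h
        fd = (svr_predict(x0 + e, sv, coeffs, b, gamma)
              - svr_predict(x0 - e, sv, coeffs, b, gamma)) / (2.0 * h)
        assert abs(fd - g[j]) < 1e-5
```

In a setting like the one the abstract describes, such an analytical gradient could feed an LBFGS-style step directly, avoiding the linear-interpolation sensitivity estimate the basic D-LBFGS relies on.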
DOI: 10.1007/s10596-023-10197-3