A Stochastic Simplex Approximate Gradient (StoSAG) for optimization under uncertainty


Detailed bibliography
Published in: International Journal for Numerical Methods in Engineering, Volume 109, Issue 13, pp. 1756–1776
Main authors: Fonseca, Rahul Rahul‐Mark; Chen, Bailian; Jansen, Jan Dirk; Reynolds, Albert
Format: Journal Article
Language: English
Published: Bognor Regis: Wiley Subscription Services, Inc., 30 March 2017
ISSN: 0029-5981, 1097-0207
Description
Summary: We consider a technique to estimate an approximate gradient using an ensemble of randomly chosen control vectors, known as Ensemble Optimization (EnOpt) in the oil and gas reservoir simulation community. In particular, we address how to obtain accurate approximate gradients when the underlying numerical models contain uncertain parameters because of geological uncertainties. In that case, ‘robust optimization’ is performed by optimizing the expected value of the objective function over an ensemble of geological models. In earlier publications, based on the pioneering work of Chen et al. (2009), it has been suggested that a straightforward one‐to‐one combination of random control vectors and random geological models is capable of generating sufficiently accurate approximate gradients. However, this form of EnOpt does not always yield satisfactory results. In a recent article, Fonseca et al. (2015) formulate a modified EnOpt algorithm, referred to here as a Stochastic Simplex Approximate Gradient (StoSAG; in earlier publications referred to as ‘modified robust EnOpt’) and show, via computational experiments, that StoSAG generally yields significantly better gradient approximations than the standard EnOpt algorithm. Here, we provide theoretical arguments to show why StoSAG is superior to EnOpt. © 2016 The Authors. International Journal for Numerical Methods in Engineering Published by John Wiley & Sons, Ltd.
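The core contrast described in the abstract can be illustrated with a minimal sketch. This is not the paper's reservoir model or exact formulation: the toy quadratic objective, the uncertain parameter distribution, and all variable names below are assumptions chosen for illustration. It shows the key structural difference: EnOpt correlates control perturbations with deviations of the objective from the ensemble mean under a one-to-one pairing of perturbations and model realizations, while StoSAG differences each perturbed evaluation against the unperturbed control on the same realization, removing the model-induced spread that contaminates the EnOpt estimate.

```python
import numpy as np

# Toy robust-optimization setup (illustrative only; not the paper's model).
# Objective J(u, m) = -||u - m||^2 with uncertain parameter m ~ N(m_bar, sigma_m^2 I).
# The true gradient of E[J] at u is -2 (u - m_bar).

rng = np.random.default_rng(0)
d = 2            # number of controls
N = 4000         # ensemble size
sigma_u = 0.02   # control-perturbation standard deviation
sigma_m = 1.0    # geological-uncertainty standard deviation

u = np.array([1.0, -0.5])
m_bar = np.zeros(d)
g_true = -2.0 * (u - m_bar)

def J(u, m):
    return -np.sum((u - m) ** 2)

dU = rng.normal(0.0, sigma_u, size=(N, d))         # random control perturbations
M = m_bar + rng.normal(0.0, sigma_m, size=(N, d))  # random model realizations

# One-to-one pairing: perturbation j is evaluated on realization j.
J_pert = np.array([J(u + dU[j], M[j]) for j in range(N)])
# Unperturbed control evaluated on the SAME realizations (needed by StoSAG).
J_base = np.array([J(u, M[j]) for j in range(N)])

# EnOpt-style estimate: correlate perturbations with deviations from the
# ensemble mean; the objective's variation across realizations adds noise.
dJ_enopt = J_pert - J_pert.mean()
g_enopt = dU.T @ dJ_enopt / (N * sigma_u**2)

# StoSAG-style estimate: difference against the unperturbed control on the
# same realization, so geological variability largely cancels.
dJ_stosag = J_pert - J_base
g_stosag = dU.T @ dJ_stosag / (N * sigma_u**2)

err_enopt = np.linalg.norm(g_enopt - g_true)
err_stosag = np.linalg.norm(g_stosag - g_true)
```

With this setup both estimators are unbiased for the robust gradient, but the EnOpt estimate's per-sample noise scales with the spread of J across geological realizations divided by the small perturbation size, whereas the StoSAG differencing cancels that spread, which is the intuition behind the paper's theoretical comparison.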
DOI: 10.1002/nme.5342