Risk-averse constrained blackbox optimization under mixed aleatory/epistemic uncertainties

Bibliographic details
Published in: Computational Optimization and Applications, Vol. 92, No. 2, pp. 375-435
Main authors: Audet, Charles; Bigeon, Jean; Couderc, Romain; Kokkolaras, Michael
Format: Journal Article
Language: English
Published: New York: Springer US (Springer Nature B.V.), 01.11.2025
ISSN: 0926-6003, 1573-2894
Description
Summary: This paper addresses risk-averse constrained optimization problems where the objective and constraint functions can only be computed by a blackbox subject to unknown uncertainties. To handle mixed aleatory/epistemic uncertainties, the problem is transformed into a conditional value-at-risk (CVaR) constrained optimization problem. General inequality constraints are managed through Lagrangian relaxation. A convolution of the Lagrangian function with a truncated Gaussian density is used to smooth the problem. A gradient estimator of the smoothed Lagrangian function is derived with two attractive properties: it estimates the gradient from only two blackbox outputs, regardless of the problem dimension, and it evaluates the blackbox only within the bound constraints. This gradient estimator is then used in a multi-timescale stochastic approximation algorithm to solve the smoothed problem. Under mild assumptions, the algorithm almost surely converges to a feasible point of the CVaR-constrained problem whose objective function value is arbitrarily close to that of a local solution. Finally, numerical experiments serve three purposes: they provide insight into how to set the algorithm's hyperparameter values, they demonstrate the effectiveness of the algorithm when the truncated Gaussian gradient estimator is used, and they show its ability to handle mixed aleatory/epistemic uncertainties in practical applications.
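The record contains no code, so the following Python sketch only illustrates, under loose assumptions, the kind of scheme the abstract describes: a Rockafellar-Uryasev CVaR reformulation, Lagrangian relaxation of the constraint, a two-evaluation Gaussian-smoothing gradient estimate, and a multi-timescale stochastic approximation loop. The blackbox, step-size schedules, and box projection are placeholders, and plain Gaussian directions clipped to the bounds stand in for the paper's truncated-Gaussian estimator; this is not the authors' algorithm.

```python
import numpy as np

def blackbox(x, rng):
    """Placeholder blackbox: noisy objective f and constraint c (c <= 0 desired)
    at design x under a random scenario xi. Purely illustrative."""
    xi = rng.normal(0.0, 0.1, size=x.shape)
    f = float(np.sum((x + xi - 0.5) ** 2))
    c = float(0.3 - np.sum(x + xi))
    return f, c

def project_box(x, lo, hi):
    """Keep the design inside the bound constraints."""
    return np.clip(x, lo, hi)

def cvar_sa(dim=2, lo=0.0, hi=1.0, alpha=0.9, sigma=0.05, iters=5000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.full(dim, 0.5)      # design variables
    t_f, t_c = 0.0, 0.0        # CVaR thresholds (Rockafellar-Uryasev variables)
    lam = 0.0                  # Lagrange multiplier for the CVaR constraint

    for k in range(1, iters + 1):
        # Separated step sizes: thresholds fastest, design intermediate,
        # multiplier slowest (the "multi-timescale" structure).
        a_t, a_x, a_l = k ** -0.55, k ** -0.7, k ** -0.9

        # Two blackbox evaluations along one random direction u.
        u = rng.normal(size=dim)
        f_p, c_p = blackbox(project_box(x + sigma * u, lo, hi), rng)
        f_m, c_m = blackbox(project_box(x - sigma * u, lo, hi), rng)

        def lagrangian(f, c):
            # Sample-based Rockafellar-Uryasev CVaR terms:
            # CVaR_a(Z) ~ t + (Z - t)^+ / (1 - a).
            cvar_f = t_f + max(f - t_f, 0.0) / (1.0 - alpha)
            cvar_c = t_c + max(c - t_c, 0.0) / (1.0 - alpha)
            return cvar_f + lam * cvar_c, cvar_c

        L_p, _ = lagrangian(f_p, c_p)
        L_m, cvar_c_m = lagrangian(f_m, c_m)

        # Two-evaluation smoothed-gradient estimate and projected descent step.
        g_x = (L_p - L_m) / (2.0 * sigma) * u
        x = project_box(x - a_x * g_x, lo, hi)

        # Stochastic subgradient steps on the CVaR thresholds, reusing f_p, c_p
        # (d/dt of t + (Z - t)^+ / (1 - a) is 1 - 1{Z > t} / (1 - a)).
        t_f -= a_t * (1.0 - float(f_p > t_f) / (1.0 - alpha))
        t_c -= a_t * (1.0 - float(c_p > t_c) / (1.0 - alpha))

        # Projected dual ascent on the multiplier.
        lam = max(0.0, lam + a_l * cvar_c_m)

    return x, lam

if __name__ == "__main__":
    x_star, lam_star = cvar_sa()
    print("approximate design:", x_star, "multiplier:", lam_star)
```

In the paper the smoothing uses a truncated Gaussian density so that every evaluation stays strictly within the bound constraints; the clipping above is only a crude stand-in, and the step-size exponents are illustrative rather than the ones analyzed for convergence.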
DOI: 10.1007/s10589-025-00704-w