AK-Gibbs: An active learning Kriging model based on Gibbs importance sampling algorithm for small failure probabilities



Detailed Bibliography
Published in: Computer Methods in Applied Mechanics and Engineering, Volume 426, Article 116992
Main Authors: Zhang, Wei; Zhao, Ziyi; Xu, Huanwei; Li, Xiaoyu; Wang, Zhonglai
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.06.2024
ISSN:0045-7825, 1879-2138
Description
Summary:
• This study proposes an AK-Gibbs method for small failure probabilities with nonlinear and time-consuming performance functions.
• An EALF function directly linked to the global error is constructed.
• An IEALF function based on the proposed EALF and the joint probability density function is proposed.
• A Gibbs importance sampling algorithm is derived from the Gibbs algorithm to effectively establish the candidate importance sample pool.
• AK-Gibbs is more efficient and accurate than other prevailing methods for small failure probabilities.

In engineering practice, estimating the small failure probability of highly nonlinear and high-dimensional limit state functions is a time-consuming procedure, and Kriging-based methods are among the most effective approaches. However, constructing the candidate importance sample pool for Kriging-based small-failure-probability analysis with multiple input random variables is an important challenge when the Metropolis-Hastings (M-H) algorithm with the acceptance-rejection sampling principle is employed. To address this challenge and estimate structural reliability more efficiently and accurately, an active learning Kriging model based on the Gibbs importance sampling algorithm (AK-Gibbs) is proposed, especially for small failure probabilities with nonlinear and high-dimensional limit state functions. A new active learning function that can be directly linked to the global error is first constructed. Weighting coefficients of the joint probability density function in the new active learning function are then determined to select the most probable points (MPPs) and update samples efficiently and accurately. The Gibbs importance sampling algorithm is derived from the Gibbs algorithm to effectively establish the candidate importance sample pool.
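The abstract's core sampling idea is that Gibbs sampling builds a candidate pool by drawing each random variable in turn from its full conditional distribution, avoiding the acceptance-rejection step of M-H. As a minimal, generic sketch (not the paper's importance density, which comes from the EALF/IEALF construction), the following samples a correlated bivariate normal whose conditionals are known in closed form:

```python
import numpy as np

def gibbs_bivariate_normal(n_samples, rho, burn_in=500, seed=0):
    """Generic Gibbs sampler for a standard bivariate normal with
    correlation rho: each coordinate is drawn from its 1-D full
    conditional, so no proposals are ever rejected."""
    rng = np.random.default_rng(seed)
    x1, x2 = 0.0, 0.0
    cond_std = np.sqrt(1.0 - rho ** 2)  # conditional std of each coordinate
    pool = np.empty((n_samples, 2))
    for i in range(burn_in + n_samples):
        # Full conditionals of a bivariate normal are 1-D normals.
        x1 = rng.normal(rho * x2, cond_std)
        x2 = rng.normal(rho * x1, cond_std)
        if i >= burn_in:
            pool[i - burn_in] = (x1, x2)
    return pool

pool = gibbs_bivariate_normal(20000, rho=0.8)
print(pool.mean(axis=0))          # should be near [0, 0]
print(np.corrcoef(pool.T)[0, 1])  # should be near 0.8
```

Because every conditional draw is accepted, the chain mixes without the tuning of an acceptance-rejection proposal, which is the practical advantage the abstract attributes to the Gibbs-based pool construction.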
An improved global error-based stopping criterion is finally constructed to avoid premature or delayed termination when estimating small failure probabilities with complicated failure domains. Two numerical and four engineering examples are employed to elaborate and validate the effectiveness of the proposed method.
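The active learning loop the abstract describes follows the general AK-MCS pattern: fit a Kriging (Gaussian process) model on a small design, score the candidate pool with a learning function, add the best point, and stop by an error-based criterion. The paper's EALF/IEALF and improved stopping rule are not given in the abstract, so this sketch substitutes the standard U learning function with the conventional U ≥ 2 stopping rule, a hypothetical linear performance function, and a fixed-hyperparameter GP in place of a fully tuned Kriging model:

```python
import numpy as np

def g(x):
    # Hypothetical linear performance function; failure when g(x) <= 0.
    # True failure probability is Phi(-3) ~ 1.35e-3.
    return 3.0 - (x[:, 0] + x[:, 1]) / np.sqrt(2.0)

def gp_predict(Xt, yt, Xs, ls=2.0, nugget=1e-6):
    """GP regression with a fixed Gaussian kernel (a simplification;
    real Kriging estimates trend and hyperparameters)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * ls ** 2))
    K = k(Xt, Xt) + nugget * np.eye(len(Xt))
    Ks = k(Xs, Xt)
    mu = Ks @ np.linalg.solve(K, yt)
    v = np.linalg.solve(K, Ks.T)
    var = np.clip(1.0 - np.einsum('ij,ji->i', Ks, v), 1e-12, None)
    return mu, np.sqrt(var)

rng = np.random.default_rng(1)
pool = rng.standard_normal((3000, 2))            # candidate sample pool
idx = rng.choice(len(pool), 12, replace=False)
X, y = pool[idx], g(pool[idx])                   # initial design

for _ in range(50):                              # active learning loop
    mu, sig = gp_predict(X, y, pool)
    U = np.abs(mu) / sig                         # U learning function
    j = int(np.argmin(U))
    if U[j] >= 2.0:                              # conventional stop rule
        break
    X = np.vstack([X, pool[j]])                  # enrich the design
    y = np.append(y, g(pool[j:j + 1]))

mu, _ = gp_predict(X, y, pool)
pf_hat = float((mu <= 0).mean())                 # Kriging-based estimate
pf_mc = float((g(pool) <= 0).mean())             # direct MC on same pool
print(pf_hat, pf_mc)
```

The point of the learning function is that only a few dozen evaluations of the expensive performance function are needed, versus thousands for plain Monte Carlo; AK-Gibbs additionally replaces the plain Monte Carlo pool above with a Gibbs-sampled importance pool so that far fewer candidates are wasted in the safe region.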
DOI:10.1016/j.cma.2024.116992