The pre-trained explainable deep learning model with stacked denoising autoencoders for slope stability analysis
| Published in: | Engineering analysis with boundary elements, Vol. 163, pp. 406–425 |
|---|---|
| Main Authors: | |
| Format: | Journal Article |
| Language: | English |
| Published: | Elsevier Ltd, 01.06.2024 |
| Topics: | |
| ISSN: | 0955-7997, 1873-197X |
| Online Access: | Get full text |
Summary:

- A pretrained deep learning framework with a stacked autoencoder is formulated for slope stability analysis in geotechnical engineering.
- An explainable model is proposed from global and local perspectives and embedded in the deep learning framework to enable model explainability.
- A series of data from real-world slope records is collected, and visualized, illustrative feature learning is performed from both statistical and engineering aspects.
- The proposed method's feasibility, accuracy, and convergence are validated with repeated stratified 10-fold cross-validation.

In this work, we propose a deeply integrated, explainable, pre-trained deep learning framework with stacked denoising autoencoders for the assessment of slope stability. The model consists of a deep neural network as a trunk net for prediction and autoencoders as branch nets for denoising. A comprehensive review of machine learning algorithms for slope stability evaluation is first given in the introduction. A dataset of 530 real slope records is then collected, visualized, and investigated through feature engineering, and further preprocessed for model training. To ensure reliable and trustworthy interpretability, a unified explanation model covering both local and global perspectives is integrated into the deep learning model; it incorporates the back-propagation-based Deep SHAP, the perturbation-based Kernel SHAP and partial dependence plots (PDPs), and the distillation-based LIME and Anchors. For a fair evaluation, repeated stratified 10-fold cross-validation is adopted. The results show that the constructed model outperforms commonly used machine learning methods in accuracy and stability on the real-world slope data. The explainable model provides reasonable explanations, validates the capability of the proposed model, and reveals the causes of and dependencies in model predictions for a given sample.
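Among the explanation techniques the abstract names, partial dependence plots (PDPs) are the simplest to illustrate: for one feature, sweep a grid of values while marginalizing over the rest of the data. The sketch below uses a hypothetical additive stand-in model (illustrative `cohesion`/`angle` inputs, not the paper's network or data) to show the mechanics only.

```python
# A minimal sketch of the partial dependence (PDP) idea: fix one feature
# at a grid value and average the model's predictions over the dataset.
# The toy model below is a hypothetical stand-in, not the paper's network.
from statistics import mean

def model(sample):
    # Toy surrogate "stability score": cohesion helps, slope angle hurts.
    cohesion, angle = sample
    return 0.8 * cohesion - 0.5 * angle

def partial_dependence(predict, data, feature, grid):
    """For each grid value, overwrite `feature` in every row and average."""
    curve = []
    for v in grid:
        preds = []
        for row in data:
            row = list(row)
            row[feature] = v          # feature of interest pinned to v
            preds.append(predict(row))
        curve.append(mean(preds))
    return curve

data = [(0.2, 30.0), (0.5, 45.0), (0.9, 20.0)]
curve = partial_dependence(model, data, feature=0, grid=[0.0, 0.5, 1.0])
# The curve is linear with slope 0.8 because the toy model is additive;
# real PDPs are used precisely to reveal non-linear feature effects.
print([round(c, 3) for c in curve])
```

Because the surrogate is additive, consecutive grid points differ by exactly 0.8 × 0.5 = 0.4; any curvature in a real PDP would signal a non-linear dependence on the feature.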
| DOI: | 10.1016/j.enganabound.2024.03.019 |
|---|---|
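The evaluation protocol reported in the abstract, repeated stratified 10-fold cross-validation, can be sketched in plain Python: indices are grouped per class, shuffled, and dealt round-robin into folds so class proportions are preserved, and the whole split is repeated with fresh shuffles. Function names and the dummy score are illustrative, not the paper's code.

```python
# A minimal stdlib sketch of repeated stratified k-fold cross-validation.
# Names (stratified_kfold_indices, repeated_cv_score) are illustrative.
import random
from collections import defaultdict

def stratified_kfold_indices(labels, k=10, seed=0):
    """Yield (train_idx, test_idx) pairs with per-class stratification."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        rng.shuffle(idxs)
        for j, i in enumerate(idxs):
            folds[j % k].append(i)    # deal indices round-robin into folds
    for f in range(k):
        test = folds[f]
        train = [i for g in range(k) if g != f for i in folds[g]]
        yield train, test

def repeated_cv_score(labels, score_fn, k=10, repeats=5):
    """Average a fold-level score over `repeats` independent shufflings."""
    scores = []
    for r in range(repeats):
        for train, test in stratified_kfold_indices(labels, k, seed=r):
            scores.append(score_fn(train, test))
    return sum(scores) / len(scores)

# Toy usage: 530 records (matching the paper's dataset size), binary labels.
labels = [i % 2 for i in range(530)]
# Dummy "score": positive-class fraction in the test fold, which stays at
# the overall class balance because of the stratification.
balance = repeated_cv_score(
    labels, lambda tr, te: sum(labels[i] for i in te) / len(te))
print(round(balance, 2))  # → 0.5
```

In a real run, `score_fn` would train the model on `train` and return accuracy on `test`; averaging over repeats reduces the variance that any single 10-fold partition introduces.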