The pre-trained explainable deep learning model with stacked denoising autoencoders for slope stability analysis

Bibliographic Details
Published in: Engineering Analysis with Boundary Elements, Vol. 163, pp. 406–425
Main Authors: Lin, Shan; Dong, Miao; Cao, Xitailang; Liang, Zenglong; Guo, Hongwei; Zheng, Hong
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.06.2024
ISSN: 0955-7997, 1873-197X
Description
Summary:
•A pretrained deep learning framework with stacked autoencoders is formulated for slope stability analysis in geotechnical engineering.
•An explainable model is proposed from global and local perspectives and embedded in the deep learning framework to enable model explainability.
•A series of data from real-world slope records is collected, and visualized, illustrative feature learning is performed from both statistical and engineering aspects.
•The proposed method's feasibility, accuracy, and convergence are validated with repeated stratified 10-fold cross-validation.

In this work, we propose a deeply integrated, explainable, pre-trained deep learning framework with stacked denoising autoencoders for the assessment of slope stability. The deep learning model consists of a deep neural network as a trunk net for prediction and autoencoders as branch nets for denoising. A comprehensive review of machine learning algorithms in slope stability evaluation is first given in the introduction. A dataset of 530 samples is then collected from real slope records, visualized and investigated through feature engineering, and further preprocessed for model training. To ensure reliable and trustworthy interpretability, a unified explanation model covering both local and global perspectives is integrated into the deep learning model; it incorporates the ad hoc back-propagation-based Deep SHAP, the perturbation-based Kernel SHAP and PDPs, and the distillation-based LIME and Anchors. For a fair evaluation, repeated stratified 10-fold cross-validation is adopted for model evaluation. The results show that the constructed model outperforms commonly used machine learning methods in terms of accuracy and stability on the real-world slope data. The explainable model provides a reasonable explanation, validates the capability of the proposed model, and reflects the causes and dependencies of model predictions for a given sample.
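As an illustration only (not the authors' code), the minimal Python sketch below shows how the evaluation and explanation steps described in the summary could be approximated with off-the-shelf tools: repeated stratified 10-fold cross-validation via scikit-learn and perturbation-based Kernel SHAP via the shap library. The placeholder data, the number of repeats, the six-feature layout, and the MLPClassifier stand-in for the pretrained trunk network are assumptions for demonstration.

# Minimal sketch of the evaluation/explanation protocol described in the abstract.
# Assumptions: random placeholder data in place of the 530 slope records,
# an MLPClassifier as a stand-in for the pretrained trunk net, and 5 repeats
# of the stratified 10-fold cross-validation (the repeat count is not stated).
import numpy as np
import shap
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((530, 6))               # placeholder features (e.g., 6 slope parameters)
y = rng.integers(0, 2, 530)            # placeholder stable/failed labels

# Repeated stratified 10-fold cross-validation for a fair accuracy estimate.
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
print(f"mean accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

# Perturbation-based Kernel SHAP on the fitted model: per-feature contributions
# to individual predictions (local) that can be aggregated for a global view.
model.fit(X, y)
explainer = shap.KernelExplainer(lambda x: model.predict_proba(x)[:, 1], X[:50])
shap_values = explainer.shap_values(X[:5])
print(shap_values.shape)               # (5 samples, 6 features)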
DOI: 10.1016/j.enganabound.2024.03.019