An integrated feature extraction framework of linear multi-layer perceptron to reduce computation complexity for remaining useful life prediction

Published in: Engineering Applications of Artificial Intelligence, Vol. 141, p. 109846
Main authors: Gao, Hui; Guo, Qingwen; Zhang, Zhizheng; Li, Yibin
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.02.2025
ISSN:0952-1976
Description
Summary: Recently, there has been a growth in deep learning-based solutions for remaining useful life (RUL) prediction. Although these increasingly complex models have significantly improved prediction performance, they typically overlook the computational and storage resources required for model deployment. Thus, we attempt to construct a lightweight model based on a simple linear multi-layer perceptron (MLP) that achieves prediction performance comparable to complex models while remaining easy to deploy on resource-constrained edge devices. First, a feature reconstruction method based on unsupervised clustering is proposed: the K-means algorithm clusters the variable-operating-condition data without supervision, and each sample is then standardized using the mean and variance of its own class, so that degradation features are separated from operating-condition effects. Next, we propose a time-series linear extractor (TiLE) architecture for extracting degradation features from multi-sensor data. This lightweight framework scales linearly in computation, which improves the model's inference efficiency. TiLE's feature recalibration mechanism is designed to reduce the interference of random factors, which helps improve prediction accuracy. Experimental results on the NASA turbine engine dataset show that the TiLE-based model outperforms state-of-the-art methods while achieving lower computational complexity and faster inference.
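The condition-wise standardization step described above (K-means clustering on operating-condition variables, then z-scoring each sample with its own cluster's statistics) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the cluster count `k`, and the NumPy-based K-means loop are all assumptions.

```python
import numpy as np

def condition_wise_standardize(X, cond_features, k=3, n_iter=50, seed=0):
    """Cluster samples by their operating-condition features with a simple
    K-means loop, then z-score every sensor sample using the mean and
    standard deviation of the cluster it belongs to."""
    rng = np.random.default_rng(seed)
    C = np.asarray(cond_features, dtype=float)
    # Initialize centroids from randomly chosen samples.
    centroids = C[rng.choice(len(C), size=k, replace=False)]
    labels = np.zeros(len(C), dtype=int)
    for _ in range(n_iter):
        # Assign each sample to its nearest centroid.
        d = np.linalg.norm(C[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centroids over non-empty clusters.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = C[labels == j].mean(axis=0)
    # Standardize each sample with its own cluster's statistics.
    Xs = np.zeros_like(X, dtype=float)
    for j in range(k):
        m = labels == j
        if np.any(m):
            mu = X[m].mean(axis=0)
            sd = X[m].std(axis=0) + 1e-8  # avoid division by zero
            Xs[m] = (X[m] - mu) / sd
    return Xs, labels
```

After this transform, sensor readings from different operating regimes share a common scale, so a downstream model sees degradation trends rather than regime shifts.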
DOI:10.1016/j.engappai.2024.109846