Adaptive transformer-based multi-task learning framework for synchronous prediction of substation flooding and outage risks


Published in: Electric Power Systems Research, Vol. 242, art. no. 111450
Main authors: Shi, Yu; Shi, Ying; Yao, Degui; Lu, Ming; Liang, Yun
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.05.2025
ISSN:0378-7796
Description
Summary:
• A novel Transformer-based multi-task prediction model synchronously predicts flooding and outage risks within substations.
• Adaptive time coding improves temporal dependency modeling for flooding and outage predictions.
• A feature fusion strategy handles multivariate inputs and reduces redundant information.
• A cost-sensitive learning loss reduces the impact of imbalance on the data side, while a joint-weighted loss reduces the impact of imbalance on the model training side.
• A novel data-driven model offers reliable decision support for substation flooding prevention.

Flooding disasters significantly threaten substation security, and forecasting risks of flooding and resulting outages within the substation is crucial for taking preventive measures and enhancing the substation's resilience. Existing models may suffer from low risk-prediction accuracy because they struggle to handle nonlinear multi-factor inputs, dynamic temporal dependencies, and unbalanced data. Additionally, they rarely forecast flooding and outages simultaneously, leading to incomplete risk assessments. Therefore, a novel Transformer-based multi-task learning model (MTformer) is proposed to simultaneously predict flooding and outage risks within substations. MTformer is an attention-based shared encoder-decoder architecture that achieves shared feature extraction and collaborative prediction. The model adopts three improvement strategies: adaptive temporal encoding to enhance temporal dependency extraction, a feature perception strategy to fuse heterogeneous data inputs, and a training balancing strategy to balance multi-task training and reduce the impact of data imbalance. The experimental results show that MTformer effectively predicts substation flooding and outage risks and outperforms mainstream predictive models, with a decrease of 47.96% in RMSE for flooding prediction and an increase of 39.82% in F1 for outage prediction. Case studies demonstrate the potential of MTformer as a decision-making tool for proactive disaster mitigation and recovery planning.
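The training balancing strategy in the abstract combines a cost-sensitive loss (to offset the rarity of outage events in the data) with a joint weighting of the two task losses (to balance regression on flooding against classification on outages during training). A minimal stdlib-only Python sketch of that general idea; the function names, the mixing weight `alpha`, and the cost weights `w_pos`/`w_neg` are illustrative assumptions, not the paper's actual formulation:

```python
import math

def cost_sensitive_bce(p, y, w_pos=4.0, w_neg=1.0):
    """Weighted binary cross-entropy for one sample.

    Rare positive events (outage, y=1) are penalized more heavily
    via w_pos, a simple form of cost-sensitive learning.
    """
    eps = 1e-7
    p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
    return -(w_pos * y * math.log(p) + w_neg * (1 - y) * math.log(1.0 - p))

def joint_weighted_loss(flood_pred, flood_true, outage_prob, outage_label,
                        alpha=0.5):
    """Joint-weighted multi-task loss: alpha * regression + (1-alpha) * classification."""
    # Flooding task: mean squared error (regression)
    mse = sum((p - t) ** 2 for p, t in zip(flood_pred, flood_true)) / len(flood_pred)
    # Outage task: mean cost-sensitive cross-entropy (classification)
    bce = sum(cost_sensitive_bce(p, y)
              for p, y in zip(outage_prob, outage_label)) / len(outage_prob)
    # Joint weighting balances the two task losses on the training side
    return alpha * mse + (1 - alpha) * bce
```

In practice the paper's model would compute these terms over batches inside a Transformer training loop; the sketch only shows how cost-sensitive weighting (data side) and joint task weighting (training side) compose into a single scalar objective.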
DOI:10.1016/j.epsr.2025.111450