Enhancing hierarchical learning of real-time optimization and model predictive control for operational performance


Detailed bibliography
Published in: Journal of Process Control, Vol. 155, p. 103559
Main authors: Ren, Rui; Li, Shaoyuan
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.11.2025
ISSN:0959-1524
Description
Summary: In process control, the integration of Real-Time Optimization (RTO) and Model Predictive Control (MPC) enables the system to achieve optimal control over both long-term and short-term horizons, thereby enhancing operational efficiency and economic performance. However, this integration still faces several challenges. In the two-layer structure, the upper-layer RTO involves solving nonlinear programming problems with significant computational complexity, making it difficult to obtain feasible solutions in real time within the limited optimization horizon. Simultaneously, the lower-layer MPC must solve rolling optimization problems within a constrained time frame, placing higher demands on real-time performance. Additionally, uncertainties in the system affect both optimization and control performance. To address these issues, this paper proposes a novel hierarchical learning approach for the RTO and MPC controllers using reinforcement learning. This method learns the optimal strategies for RTO and MPC across different time scales, effectively mitigating the high computational costs associated with online computations. Through reward design and experience replay during the hierarchical learning process, efficient training of the upper- and lower-layer strategies is achieved. Offline training under various uncertainty scenarios, combined with online learning, effectively reduces performance degradation due to model uncertainties. The proposed approach demonstrates excellent performance in two representative chemical engineering case studies.

Highlights:
• A new hierarchical learning approach is designed to learn RTO and MPC strategies over different time scales. This method determines steady-state setpoints and lower-layer controllers, effectively eliminating the need for the repeated online calculations required in two-layer architectures.
• The proposed method adopts a combination of offline training and online learning. It explicitly accounts for the impact of uncertainties on the two-layer structure, effectively enhancing the system's adaptability to dynamic changes.
• The proposed algorithm is validated in two representative chemical engineering case studies, demonstrating its potential for industrial process control applications.
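The two-timescale structure the abstract describes (a slow upper layer producing steady-state setpoints, a fast lower layer tracking them, with transitions stored for experience replay) can be illustrated with a toy sketch. This is an assumption-laden illustration only: the class names, the first-order plant, the fixed setpoint schedule, and the proportional tracking law are our own stand-ins, not the authors' learned RTO/MPC policies.

```python
# Illustrative sketch of a two-timescale RTO/MPC-style hierarchy.
# All names and dynamics here are hypothetical stand-ins, not the
# paper's algorithm.
import random

random.seed(0)  # deterministic toy run

class UpperPolicy:
    """Slow layer: proposes a steady-state setpoint every `period` steps
    (stands in for a learned RTO strategy)."""
    def __init__(self, period=10):
        self.period = period

    def setpoint(self, step, state):
        # A simple schedule alternating between two economic targets;
        # a trained policy would map the state to a setpoint instead.
        return 1.0 if step % (2 * self.period) < self.period else 0.5

class LowerPolicy:
    """Fast layer: tracks the current setpoint
    (stands in for a learned MPC control law)."""
    def __init__(self, gain=0.5):
        self.gain = gain

    def action(self, state, setpoint):
        return self.gain * (setpoint - state)

def rollout(steps=100, noise=0.02):
    upper, lower = UpperPolicy(), LowerPolicy()
    state, sp, replay = 0.0, 0.0, []
    for t in range(steps):
        if t % upper.period == 0:          # slow timescale: refresh setpoint
            sp = upper.setpoint(t, state)
        u = lower.action(state, sp)        # fast timescale: tracking action
        nxt = state + u + random.gauss(0.0, noise)  # toy first-order plant
        reward = -(sp - nxt) ** 2          # tracking-style reward shaping
        replay.append((state, sp, u, reward, nxt))  # experience replay buffer
        state = nxt
    return state, replay

final_state, buffer = rollout()
```

In this sketch the replay buffer collects `(state, setpoint, action, reward, next_state)` tuples at the fast timescale, which is the kind of data an off-policy training loop (offline or online) would sample from.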
DOI:10.1016/j.jprocont.2025.103559