Enhancing hierarchical learning of real-time optimization and model predictive control for operational performance

Published in: Journal of process control, Volume 155, p. 103559
Main authors: Ren, Rui; Li, Shaoyuan
Medium: Journal Article
Language: English
Published: Elsevier Ltd, 01.11.2025
Subjects:
ISSN: 0959-1524
Online access: Get full text
Abstract In process control, the integration of Real-Time Optimization (RTO) and Model Predictive Control (MPC) enables the system to achieve optimal control over both long-term and short-term horizons, thereby enhancing operational efficiency and economic performance. However, this integration still faces several challenges. In the two-layer structure, the upper-layer RTO involves solving nonlinear programming problems with significant computational complexity, making it difficult to obtain feasible solutions in real time within the limited optimization horizon. Simultaneously, the lower-layer MPC must solve rolling optimization problems within a constrained time frame, placing higher demands on real-time performance. Additionally, uncertainties in the system affect both optimization and control performance. To address these issues, this paper proposes a novel hierarchical learning approach for the RTO and MPC controllers using reinforcement learning. This method learns the optimal strategies for RTO and MPC across different time scales, effectively mitigating the high computational costs associated with online computation. Through reward design and experience replay during the hierarchical learning process, efficient training of the upper- and lower-layer strategies is achieved. Offline training under various uncertainty scenarios, combined with online learning, effectively reduces performance degradation due to model uncertainties. The proposed approach demonstrates excellent performance in two representative chemical engineering case studies.
•A new hierarchical learning approach is designed to learn RTO and MPC strategies over different time scales. This method determines steady-state setpoints and lower-layer controllers, effectively eliminating the need for the repeated online calculations required in two-layer architectures.
•The proposed method combines offline training with online learning. It explicitly accounts for the impact of uncertainties on the two-layer structure, effectively enhancing the system's adaptability to dynamic changes.
•The proposed algorithm is validated in two representative chemical engineering case studies, demonstrating its potential for industrial process control applications.
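The two-timescale structure described in the abstract and highlights can be sketched as follows. This is a minimal illustrative toy, not the paper's actual algorithm: the scalar dynamics, the placeholder policies, the horizon `K`, and the buffer sizes are all hypothetical assumptions chosen only to show how an upper-layer setpoint decision at a slow timescale interleaves with a fast lower-layer tracking decision, with each layer logging transitions into its own experience-replay buffer.

```python
import random
from collections import deque

# Minimal two-timescale sketch (hypothetical toy, not the paper's method).
# The upper layer proposes a steady-state setpoint every K control steps;
# the lower layer tracks it at every step; both layers store experience
# in replay buffers for later (offline or online) training.

K = 5                               # slow (RTO-like) timescale: steps per setpoint update
upper_buffer = deque(maxlen=1000)   # experience replay for the upper layer
lower_buffer = deque(maxlen=1000)   # experience replay for the lower layer


def upper_policy(state):
    # placeholder upper-layer policy: nudge the setpoint toward a fixed optimum
    return 0.5 * state + 0.5


def lower_policy(state, setpoint):
    # placeholder lower-layer policy: proportional action toward the setpoint
    return 0.8 * (setpoint - state)


def rollout(x0=0.0, steps=20, noise=0.0):
    """Run one closed-loop episode, filling both replay buffers."""
    x, setpoint = x0, 0.0
    for t in range(steps):
        if t % K == 0:                       # slow-timescale decision
            setpoint = upper_policy(x)
            upper_buffer.append((x, setpoint))
        u = lower_policy(x, setpoint)        # fast-timescale decision
        x_next = x + u + noise * (random.random() - 0.5)
        reward = -(setpoint - x_next) ** 2   # tracking-error penalty
        lower_buffer.append((x, u, reward, x_next))
        x = x_next
    return x


final_state = rollout()
```

With the noise-free defaults, the state is driven toward the upper layer's fixed point at 1.0, and the buffers hold 4 upper-layer and 20 lower-layer transitions after one episode.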
ArticleNumber 103559
Author Ren, Rui
Li, Shaoyuan
Author_xml – sequence: 1
  givenname: Rui
  surname: Ren
  fullname: Ren, Rui
– sequence: 2
  givenname: Shaoyuan
  surname: Li
  fullname: Li, Shaoyuan
  email: syli@sjtu.edu.cn
ContentType Journal Article
Copyright 2025 Elsevier Ltd
DOI 10.1016/j.jprocont.2025.103559
Discipline Engineering
Computer Science
IsPeerReviewed true
IsScholarly true
Keywords Real-time optimization
Model predictive control
Process control
Reinforcement learning
Language English
PublicationDate November 2025
PublicationDateYYYYMMDD 2025-11-01
PublicationTitle Journal of process control
PublicationYear 2025
Publisher Elsevier Ltd
StartPage 103559
SubjectTerms Model predictive control
Process control
Real-time optimization
Reinforcement learning
Title Enhancing hierarchical learning of real-time optimization and model predictive control for operational performance
URI https://dx.doi.org/10.1016/j.jprocont.2025.103559
Volume 155