Combined MPC and reinforcement learning for traffic signal control in urban traffic networks

Detailed bibliography
Published in: 2022 26th International Conference on System Theory, Control and Computing (ICSTCC), pp. 432-439
Main authors: Remmerswaal, Willemijn; Sun, Dingshan; Jamshidnejad, Anahita; De Schutter, Bart
Format: Conference paper
Language: English
Published: IEEE, 19 October 2022
Description
Summary: In general, the performance of model-based controllers cannot be guaranteed under model uncertainties or disturbances, while learning-based controllers require a sufficiently extensive training process to perform well. These issues especially hold for large-scale nonlinear systems such as urban traffic networks. In this paper, a new framework is proposed that combines model predictive control (MPC) and reinforcement learning (RL) to provide the desired performance for urban traffic networks even during the learning process, despite model uncertainties and disturbances. MPC and RL complement each other well, since MPC provides a sub-optimal, constraint-satisfying control input, while RL provides adaptive control laws and can handle uncertainties and disturbances. The resulting combined framework is applied to traffic signal control (TSC) of an urban traffic network. A case study is carried out to compare the performance of the proposed framework with that of baseline controllers. The results show that the proposed combined framework outperforms conventional control methods under system uncertainties, in terms of reducing traffic congestion.
DOI:10.1109/ICSTCC55426.2022.9931771
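The record gives no implementation details, but the minimal Python sketch below illustrates one way an MPC+RL combination of the kind described in the summary could be structured for a single two-phase intersection: a brute-force MPC step computes a constraint-satisfying green split from a simple store-and-forward queue model, and a small learned correction adjusts that split within bounds to compensate for model mismatch. All parameter values, names (mpc_green_split, RLCorrection), and the bandit-style learner are illustrative assumptions, not the formulation used in the paper.

```python
# Hypothetical sketch (not the authors' exact method): combining an MPC
# baseline with a bounded RL correction for a two-phase traffic signal.
import numpy as np

def mpc_green_split(queues, horizon=5, candidates=11):
    """Brute-force 'MPC': pick the green split that minimizes the predicted
    total queue over a short horizon, using an assumed store-and-forward
    surrogate model of the intersection."""
    best_split, best_cost = 0.5, np.inf
    for g in np.linspace(0.1, 0.9, candidates):      # candidate green fractions
        q, cost = queues.copy(), 0.0
        for _ in range(horizon):
            arrivals = np.array([0.4, 0.3])          # assumed nominal demand (veh/s)
            service = np.array([g, 1.0 - g]) * 0.9   # saturation flow * green share
            q = np.maximum(q + arrivals - service, 0.0)
            cost += q.sum()
        if cost < best_cost:
            best_split, best_cost = g, cost
    return best_split

class RLCorrection:
    """Tiny value learner that adds a bounded correction to the MPC action:
    the MPC baseline keeps constraints satisfied while the learned part
    adapts to model mismatch (a stand-in for the RL policy in the paper)."""
    def __init__(self, deltas=(-0.1, 0.0, 0.1), eps=0.2, alpha=0.1):
        self.deltas, self.eps, self.alpha = deltas, eps, alpha
        self.q_values = np.zeros(len(deltas))
    def act(self, rng):
        if rng.random() < self.eps:                  # epsilon-greedy exploration
            return int(rng.integers(len(self.deltas)))
        return int(np.argmax(self.q_values))
    def update(self, idx, reward):
        self.q_values[idx] += self.alpha * (reward - self.q_values[idx])

rng = np.random.default_rng(0)
queues = np.array([8.0, 5.0])                        # initial queue lengths (veh)
rl = RLCorrection()
for step in range(50):
    g_mpc = mpc_green_split(queues)                  # constraint-satisfying baseline
    idx = rl.act(rng)
    g = float(np.clip(g_mpc + rl.deltas[idx], 0.1, 0.9))  # RL adjusts within bounds
    # "True" system: demand differs from the MPC model (model mismatch + noise).
    true_arrivals = np.array([0.5, 0.3]) + 0.05 * rng.standard_normal(2)
    service = np.array([g, 1.0 - g]) * 0.9
    queues = np.maximum(queues + true_arrivals - service, 0.0)
    rl.update(idx, reward=-queues.sum())             # reward: negative total queue
```

In this hypothetical setup the RL component can only nudge the green split within a fixed band around the MPC solution, which is one simple way to retain acceptable performance while learning; the paper's actual framework and traffic model should be taken from the publication itself.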