Collaborative Deep Reinforcement Learning for Solving Multi-Objective Vehicle Routing Problems

Bibliographic Details
Title: Collaborative Deep Reinforcement Learning for Solving Multi-Objective Vehicle Routing Problems
Authors: WU, Yaoxin, FAN, Mingfeng, CAO, Zhiguang, GAO, Ruobin, HOU, Yaqing, SARTORETTI, Guillaume
Publisher Information: International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS), 2024.
Publication Year: 2024
Subject Terms: Deep reinforcement learning, Artificial Intelligence and Robotics, Attention network, Collaborative active search, Multi-objective vehicle routing problems
Description: Existing deep reinforcement learning (DRL) methods for multi-objective vehicle routing problems (MOVRPs) typically decompose an MOVRP into subproblems, each with its own preference, and then train policies to solve the corresponding subproblems. However, this paradigm struggles to capture the intricate interactions among subproblems, which holds back the quality of the resulting Pareto solutions. To counteract this limitation, we introduce a collaborative deep reinforcement learning method. We first propose a preference-based attention network (PAN) that allows the DRL agents to construct solutions to subproblems in parallel: a shared encoder learns the instance embedding, and a decoder is tailored to each agent by preference intervention to construct its respective solution. We then design a collaborative active search (CAS) that further improves solution quality by updating only a subset of the decoder parameters per instance during inference. Within CAS, we also explicitly foster interactions among neighboring DRL agents via imitation learning, enabling them to exchange insights from elite solutions to similar subproblems. Extensive results on random and benchmark instances verify the efficacy of PAN and CAS, which is particularly pronounced on configurations (i.e., problem sizes or node distributions) beyond those seen in training. Our code is available at https://github.com/marmotlab/PAN-CAS.
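
To make the pipeline described in the abstract concrete, below is a minimal PyTorch sketch of its two ideas: a shared encoder with preference-conditioned decoders (PAN-style), and an inference-time search that updates only a subset of decoder parameters per instance and lets an agent imitate a neighboring agent's elite tour (CAS-style). All class names, tensor shapes, the weighted-sum scalarization over tour length and longest edge, and the choice of the preference-projection layer as the "partial" parameters are illustrative assumptions, not the authors' implementation; the actual code is in the linked repository.

    import torch
    import torch.nn as nn

    class SharedEncoder(nn.Module):
        """Encoder shared by all DRL agents: embeds one instance once."""
        def __init__(self, d_model=128):
            super().__init__()
            self.embed = nn.Linear(2, d_model)  # (x, y) coordinates -> d_model
            layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
            self.attn = nn.TransformerEncoder(layer, num_layers=3)

        def forward(self, coords):                 # coords: (B, N, 2)
            return self.attn(self.embed(coords))   # node embeddings: (B, N, d_model)

    class PreferenceDecoder(nn.Module):
        """Per-agent decoder; the preference vector 'intervenes' in the context."""
        def __init__(self, d_model=128, n_obj=2):
            super().__init__()
            self.pref_proj = nn.Linear(n_obj, d_model)    # preference intervention
            self.query = nn.Linear(2 * d_model, d_model)

        def forward(self, node_emb, pref, visited):
            # Context = mean graph embedding concatenated with projected preference.
            ctx = torch.cat([node_emb.mean(dim=1), self.pref_proj(pref)], dim=-1)
            q = self.query(ctx)                                  # (B, d_model)
            logits = torch.einsum("bd,bnd->bn", q, node_emb)     # node scores
            logits = logits.masked_fill(visited, float("-inf"))  # mask visited
            return torch.distributions.Categorical(logits=logits)

    def rollout(decoder, node_emb, pref):
        """Autoregressively sample a tour; returns indices and summed log-probs."""
        B, N, _ = node_emb.shape
        visited = torch.zeros(B, N, dtype=torch.bool, device=node_emb.device)
        tour, logp = [], 0.0
        for _ in range(N):
            dist = decoder(node_emb, pref, visited)
            a = dist.sample()                      # next node per batch element
            logp = logp + dist.log_prob(a)
            visited = visited.scatter(1, a.unsqueeze(1), True)
            tour.append(a)
        return torch.stack(tour, dim=1), logp      # tour: (B, N)

    def scalarized_cost(coords, tour, pref):
        """Weighted sum of two toy objectives: total length and longest edge."""
        pts = coords.gather(1, tour.unsqueeze(-1).expand(-1, -1, 2))
        edges = (pts - pts.roll(-1, dims=1)).norm(dim=-1)        # (B, N)
        objs = torch.stack([edges.sum(1), edges.max(1).values], dim=-1)
        return (objs * pref).sum(dim=-1)                         # (B,)

    def active_search(decoder, coords, node_emb, pref, steps=20, samples=16, lr=1e-3):
        """Per-instance search: encoder frozen, only pref_proj updated (REINFORCE)."""
        emb = node_emb.detach().expand(samples, -1, -1)  # replicate one instance
        c = coords.expand(samples, -1, -1)
        p = pref.expand(samples, -1)
        opt = torch.optim.Adam(decoder.pref_proj.parameters(), lr=lr)  # subset only
        for _ in range(steps):
            tour, logp = rollout(decoder, emb, p)
            cost = scalarized_cost(c, tour, p)
            loss = ((cost - cost.mean()).detach() * logp).mean()  # batch baseline
            opt.zero_grad(); loss.backward(); opt.step()

    def imitate_elite(decoder, node_emb, pref, elite_tour, steps=5, lr=1e-3):
        """Neighbor exchange: raise the log-likelihood of a neighboring agent's
        elite tour under this decoder (a stand-in for the paper's imitation step)."""
        opt = torch.optim.Adam(decoder.pref_proj.parameters(), lr=lr)
        B, N, _ = node_emb.shape
        for _ in range(steps):
            visited = torch.zeros(B, N, dtype=torch.bool, device=node_emb.device)
            logp = 0.0
            for t in range(N):
                dist = decoder(node_emb, pref, visited)
                a = elite_tour[:, t]               # teacher-force the elite tour
                logp = logp + dist.log_prob(a)
                visited = visited.scatter(1, a.unsqueeze(1), True)
            loss = -logp.mean()
            opt.zero_grad(); loss.backward(); opt.step()

Under these assumptions, one would encode an instance once (node_emb = SharedEncoder()(coords) with coords of shape (1, N, 2)), run active_search for each preference's decoder, and periodically call imitate_elite with the best tour found by a decoder for a neighboring preference, mirroring the shared-encoder, partial-update, and elite-exchange structure the abstract describes.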
Document Type: Conference object; Article
File Description: application/pdf
Language: English
DOI: 10.5555/3635637.3663059
Access URL: https://research.tue.nl/en/publications/037d69a0-fcfc-4a88-84c1-0e75c1edb4b5
https://doi.org/10.5555/3635637.3663059
Rights: CC BY; CC BY-NC-ND
Accession Number: edsair.dedup.wf.002..06c43b3b16e71e0b2fd98cbb6d452cbc
Database: OpenAIRE