An Admittance Parameter Optimization Method Based on Reinforcement Learning for Robot Force Control

Bibliographic Details
Published in: Actuators, Vol. 13, No. 9, p. 354
Main Authors: Hu, Xiaoyi, Liu, Gongping, Ren, Peipei, Jia, Bing, Liang, Yiwen, Li, Longxi, Duan, Shilin
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.09.2024
ISSN: 2076-0825
Description
Summary: When a robot performs tasks such as assembly or human–robot interaction, collisions with an unknown environment are inevitable and pose potential safety hazards. To improve robots' compliance in unknown environments and enhance their intelligence in contact force-sensitive tasks, this paper proposes an improved admittance force control method that combines classical adaptive control with machine learning, so that each is applied at the stage of training where it is most advantageous and better overall performance is ultimately achieved. In addition, this paper proposes an improved Deep Deterministic Policy Gradient (DDPG)-based optimizer, combined with a Gaussian process (GP) model, to optimize the admittance parameters. To verify the feasibility of the algorithm, simulations and experiments are carried out in MATLAB and on a UR10e robot, respectively. The experimental results show that the algorithm improves convergence speed by 33% compared with a general model-free learning method and offers better control performance and robustness. Finally, the adjustment time required by the algorithm is 44% shorter than that of classical adaptive admittance control.
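For context, the "admittance parameters" being optimized are the virtual mass, damping, and stiffness gains of a standard admittance law M·a + B·v + K·(x − x_d) = f_ext. The sketch below is a minimal, illustrative 1-DOF discrete-time admittance controller (not taken from the paper); the function name, gain values, and integration scheme are assumptions, and M, B, K are the quantities a learning-based optimizer such as the paper's DDPG + GP scheme would tune.

```python
import numpy as np


def admittance_step(f_ext, x, v, x_d, M, B, K, dt):
    """One step of a 1-DOF admittance law: M*a + B*v + K*(x - x_d) = f_ext.

    M, B, K are the admittance parameters (virtual mass, damping, stiffness)
    that a parameter optimizer would adjust; this is an illustrative sketch,
    not the paper's controller.
    """
    a = (f_ext - B * v - K * (x - x_d)) / M  # compliant acceleration
    v_next = v + a * dt                      # integrate velocity
    x_next = x + v_next * dt                 # integrate reference position
    return x_next, v_next


# Example: respond to a constant 5 N contact force for 1 s (assumed gains).
x, v, dt = 0.0, 0.0, 0.001
for _ in range(int(1.0 / dt)):
    x, v = admittance_step(f_ext=5.0, x=x, v=v, x_d=0.0,
                           M=2.0, B=40.0, K=300.0, dt=dt)
print(f"displacement after 1 s: {x:.4f} m")  # approaches f_ext / K
```

Larger K or B makes the response stiffer or more damped; the trade-off between contact force overshoot and settling time is what motivates tuning these gains automatically.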
DOI: 10.3390/act13090354