Language Models as Zero-Shot Trajectory Generators

Detailed Bibliography
Published in: IEEE Robotics and Automation Letters, Volume 9, Issue 7, pp. 6728-6735
Main authors: Kwon, Teyun; Di Palo, Norman; Johns, Edward
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.07.2024
ISSN: 2377-3766
Description
Summary: Large Language Models (LLMs) have recently shown promise as high-level planners for robots when given access to a selection of low-level skills. However, it is often assumed that LLMs do not possess sufficient knowledge to be used for the low-level trajectories themselves. In this work, we address this assumption thoroughly, and investigate if an LLM (GPT-4) can directly predict a dense sequence of end-effector poses for manipulation tasks, when given access to only object detection and segmentation vision models. We designed a single, task-agnostic prompt, without any in-context examples, motion primitives, or external trajectory optimisers. Then we studied how well it can perform across 30 real-world language-based tasks, such as "open the bottle cap" and "wipe the plate with the sponge", and we investigated which design choices in this prompt are the most important. Our conclusions raise the assumed limit of LLMs for robotics, and we reveal for the first time that LLMs do indeed possess an understanding of low-level robot control sufficient for a range of common tasks, and that they can additionally detect failures and then re-plan trajectories accordingly.
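As a rough illustration of the kind of output the abstract describes (an LLM emitting a dense sequence of end-effector poses as text), the sketch below parses bracketed 7-DoF pose rows from a model's reply and linearly interpolates between them. The row format, function names, and interpolation scheme are assumptions for illustration, not the authors' actual prompt or code.

```python
import re


def parse_pose_trajectory(llm_output: str):
    """Extract bracketed pose rows such as '[0.4, 0.1, 0.3, 0, 0, 0, 1]'
    from an LLM text reply, returning tuples of
    (x, y, z, roll, pitch, yaw, gripper). Rows without exactly
    seven values are skipped. (Hypothetical format, for illustration.)"""
    poses = []
    for match in re.finditer(r"\[([^\]]+)\]", llm_output):
        values = [float(v) for v in match.group(1).split(",")]
        if len(values) == 7:  # keep only well-formed 7-DoF rows
            poses.append(tuple(values))
    return poses


def densify(poses, steps_per_segment=10):
    """Linearly interpolate between consecutive waypoints to obtain a
    denser sequence of end-effector targets for execution."""
    dense = []
    for a, b in zip(poses, poses[1:]):
        for i in range(steps_per_segment):
            t = i / steps_per_segment
            dense.append(tuple(x + t * (y - x) for x, y in zip(a, b)))
    dense.append(poses[-1])  # include the final waypoint exactly
    return dense
```

For example, a two-waypoint reply ("move above the cap, then down onto it") densified with 5 steps per segment yields 6 poses ending exactly at the second waypoint.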
DOI: 10.1109/LRA.2024.3410155