More Than a Feeling: Learning to Grasp and Regrasp Using Vision and Touch


Detailed Bibliography
Published in: IEEE Robotics and Automation Letters, Vol. 3, No. 4, pp. 3300–3307
Main authors: Calandra, Roberto; Owens, Andrew; Jayaraman, Dinesh; Lin, Justin; Yuan, Wenzhen; Malik, Jitendra; Adelson, Edward H.; Levine, Sergey
Format: Journal Article
Language: English
Published: Piscataway: IEEE, 01.10.2018
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
ISSN: 2377-3766
Online access: Get full text
Description
Summary: For humans, the process of grasping an object relies heavily on rich tactile feedback. Most recent robotic grasping work, however, has been based only on visual input, and thus cannot easily benefit from feedback after initiating contact. In this letter, we investigate how a robot can learn to use tactile information to iteratively and efficiently adjust its grasp. To this end, we propose an end-to-end action-conditional model that learns regrasping policies from raw visuo-tactile data. This model, a deep, multimodal convolutional network, predicts the outcome of a candidate grasp adjustment, and then executes a grasp by iteratively selecting the most promising actions. Our approach requires neither calibration of the tactile sensors nor any analytical modeling of contact forces, thus reducing the engineering effort required to obtain efficient grasping policies. We train our model with data from about 6450 grasping trials on a two-finger gripper equipped with GelSight high-resolution tactile sensors on each finger. Across extensive experiments, our approach outperforms a variety of baselines at 1) estimating grasp adjustment outcomes, 2) selecting efficient grasp adjustments for quick grasping, and 3) reducing the amount of force applied at the fingers, while maintaining competitive performance. Finally, we study the choices made by our model and show that it has successfully acquired useful and interpretable grasping behaviors.
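The selection procedure described in the summary — score each candidate grasp adjustment with a learned outcome-prediction model, then execute the most promising one — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the paper's predictor is a deep multimodal convolutional network over raw visuo-tactile input, whereas here `predict_success` is a hypothetical toy stand-in so the loop is runnable.

```python
# Hedged sketch of action-conditional grasp selection: evaluate candidate
# adjustments with an outcome predictor and pick the best-scoring one.
# All names (predict_success, select_regrasp) are hypothetical.

def predict_success(observation, action):
    """Stand-in for the learned model: returns an estimated probability of
    grasp success given the current observation and a candidate action.
    A deterministic toy score replaces the real network here."""
    dx, dz = action
    # Toy heuristic: prefer small lateral shifts and a slight downward motion.
    return 1.0 / (1.0 + dx * dx + (dz + 0.01) ** 2)

def select_regrasp(observation, candidate_actions):
    """Pick the candidate adjustment with the highest predicted success."""
    return max(candidate_actions, key=lambda a: predict_success(observation, a))

# Usage: candidate adjustments as (lateral shift, vertical shift) tuples.
candidates = [(-0.02, 0.0), (0.0, -0.01), (0.01, 0.02)]
best = select_regrasp(None, candidates)
print(best)  # (0.0, -0.01) under the toy score
```

In the paper this loop is run iteratively: after executing the selected adjustment, new visuo-tactile observations are gathered and the candidates are re-scored until the model predicts a sufficiently confident grasp.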
DOI: 10.1109/LRA.2018.2852779