Point Cloud Projective Analysis for Part-Based Grasp Planning

Detailed Bibliography
Published in: IEEE Robotics and Automation Letters, Volume 5, Issue 3, pp. 4695-4702
Main Authors: Monica, Riccardo; Aleotti, Jacopo
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.07.2020
ISSN: 2377-3766
Description
Summary: This work presents an approach for part-based grasp planning in point clouds. A complete pipeline is proposed that allows a robot manipulator equipped with a range camera to perform object detection, categorization, segmentation into meaningful parts, and part-based semantic grasping. A supervised image-space technique is adopted for point cloud segmentation based on projective analysis. Projective analysis generates a set of 2D projections from the input object point cloud, labels each object projection by transferring knowledge from existing labeled images, and then fuses the labels by back-projection onto the object point cloud. We introduce an algorithm for point cloud categorization based on 2D projections. We also propose a viewpoint-aware algorithm that filters 2D projections according to the scanning path of the robot. Object categorization and segmentation experiments were carried out with both synthetic and real datasets. Results indicate that the proposed approach performs better than a CNN-based method when the training set is of limited size. Finally, we show part-based grasping tasks in a real robotic setup.
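
The abstract's central mechanism, projecting the object point cloud to a set of 2D views, labeling each view, and fusing the labels by back-projection, lends itself to a compact illustration. The sketch below is a minimal, hypothetical rendering of that loop, not the authors' implementation: it assumes a pinhole camera model, per-view label images produced by some external 2D labeling step, and simple majority voting as the fusion rule. All names and parameters are illustrative.

    import numpy as np

    def project_points(points, K, T):
        # points: (N, 3) world coordinates; K: 3x3 pinhole intrinsics;
        # T: 4x4 camera-from-world transform.
        pts_h = np.hstack([points, np.ones((len(points), 1))])
        cam = (T @ pts_h.T).T[:, :3]            # points in the camera frame
        in_front = cam[:, 2] > 1e-6             # ignore points behind the camera
        uvw = (K @ cam.T).T
        z = np.where(in_front, uvw[:, 2], 1.0)  # safe divisor for masked points
        return uvw[:, 0] / z, uvw[:, 1] / z, in_front

    def fuse_labels(points, views, num_classes):
        # views: list of (K, T, label_img), where label_img is an (H, W)
        # integer image of per-pixel part labels in [0, num_classes).
        votes = np.zeros((len(points), num_classes), dtype=np.int64)
        for K, T, label_img in views:
            h, w = label_img.shape
            u, v, in_front = project_points(points, K, T)
            ui = np.round(u).astype(int)
            vi = np.round(v).astype(int)
            ok = in_front & (ui >= 0) & (ui < w) & (vi >= 0) & (vi < h)
            idx = np.flatnonzero(ok)
            votes[idx, label_img[vi[idx], ui[idx]]] += 1
        return votes.argmax(axis=1)             # fused per-point part label

Occlusion is ignored here for brevity; a faithful back-projection would keep, per pixel, only the point closest to the camera (for example via a z-buffer) before transferring its label.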
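
The viewpoint-aware filtering step can be sketched in the same style. Here a virtual 2D projection is kept only if its viewing direction lies within a fixed angular threshold of some direction actually covered by the robot's scanning path; representing the path as a set of unit viewing directions, and the 30-degree default, are assumptions made for illustration.

    def filter_views_by_scan_path(views, scan_dirs, max_angle_deg=30.0):
        # scan_dirs: (M, 3) viewing directions sampled along the robot's
        # scanning path (an assumed representation of the path).
        scan_dirs = scan_dirs / np.linalg.norm(scan_dirs, axis=1, keepdims=True)
        cos_thresh = np.cos(np.radians(max_angle_deg))
        kept = []
        for K, T, label_img in views:
            axis = T[2, :3]  # camera optical axis expressed in world coordinates
            if np.max(scan_dirs @ axis) >= cos_thresh:
                kept.append((K, T, label_img))
        return kept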
DOI: 10.1109/LRA.2020.3003883