Visual-Tactile Fusion for Object Recognition

Bibliographic Details
Published in: IEEE Transactions on Automation Science and Engineering, Vol. 14, No. 2, pp. 996-1008
Main Authors: Huaping Liu, Yuanlong Yu, Fuchun Sun, Jason Gu
Format: Journal Article
Language: English
Published: IEEE, 01.04.2017
ISSN: 1545-5955, 1558-3783
Description
Summary: The camera provides rich visual information about objects and has become one of the most mainstream sensors in the automation community. However, it is often inapplicable when objects are not visually distinguishable. Tactile sensors, on the other hand, can capture multiple object properties, such as texture, roughness, spatial features, compliance, and friction, and therefore provide another important modality for perception. Nevertheless, effectively combining the visual and tactile modalities remains a challenging problem. In this paper, we develop a visual-tactile fusion framework for object recognition tasks. The paper uses a multivariate-time-series model to represent the tactile sequence and a covariance descriptor to characterize the image. Further, we design a joint group kernel sparse coding (JGKSC) method to tackle the intrinsically weak pairing problem in visual-tactile data samples. Finally, we develop a visual-tactile data set composed of 18 household objects for validation. The experimental results show that considering both visual and tactile inputs is beneficial and that the proposed method provides an effective fusion strategy.
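To make the visual side of the pipeline concrete, the sketch below computes a region covariance descriptor of the kind the summary refers to: simple per-pixel features are stacked and their covariance, a small fixed-size matrix, summarizes the image region. The abstract does not specify which feature channels the paper uses; the ones chosen here (pixel coordinates, intensity, and gradient magnitudes, following the common Tuzel-style construction) are an illustrative assumption, not the authors' exact design.

    import numpy as np

    def region_covariance(patch):
        # Covariance descriptor of an image region: stack per-pixel
        # features and return their covariance, a small symmetric
        # positive semi-definite matrix characterizing the region.
        # ASSUMPTION: the channels [x, y, I, |Ix|, |Iy|] are a common
        # default; the paper's exact channels are not in the abstract.
        h, w = patch.shape
        ys, xs = np.mgrid[0:h, 0:w]                 # pixel coordinates
        gy, gx = np.gradient(patch.astype(float))   # first derivatives
        feats = np.stack([
            xs.ravel(), ys.ravel(),                 # location
            patch.ravel().astype(float),            # intensity
            np.abs(gx).ravel(), np.abs(gy).ravel()  # edge strength
        ])
        return np.cov(feats)                        # shape (5, 5)

    # Usage: any grayscale patch yields the same fixed-size 5x5
    # descriptor, regardless of the patch's width and height.
    patch = np.random.rand(32, 48)
    C = region_covariance(patch)
    print(C.shape)  # (5, 5)

Because every region maps to a descriptor of the same fixed size, images of different dimensions remain directly comparable, which is what makes such representations convenient inputs for kernel-based sparse coding schemes like the JGKSC method described above.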
DOI: 10.1109/TASE.2016.2549552