Visual-Tactile Fusion for Object Recognition

Published in: IEEE Transactions on Automation Science and Engineering, Volume 14, Issue 2, pp. 996–1008
Main authors: Huaping Liu, Yuanlong Yu, Fuchun Sun, Jason Gu
Format: Journal Article
Language: English
Publication details: IEEE, 1 April 2017
ISSN: 1545-5955, 1558-3783
Summary: The camera provides rich visual information about objects and has become one of the most widely used sensors in the automation community. However, it is often of limited use when objects are not visually distinguishable. Tactile sensors, on the other hand, can capture multiple object properties, such as texture, roughness, spatial features, compliance, and friction, and therefore provide another important modality for perception. Nevertheless, effectively combining the visual and tactile modalities remains a challenging problem. In this paper, we develop a visual-tactile fusion framework for object recognition tasks. We use a multivariate-time-series model to represent the tactile sequence and a covariance descriptor to characterize the image. Further, we design a joint group kernel sparse coding (JGKSC) method to tackle the intrinsically weak pairing problem in visual-tactile data samples. Finally, we develop a visual-tactile data set composed of 18 household objects for validation. The experimental results show that considering both visual and tactile inputs is beneficial and that the proposed method provides an effective fusion strategy.
DOI: 10.1109/TASE.2016.2549552
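
Note: the covariance descriptor mentioned in the summary is a standard image representation in the style of region covariance features. Below is a minimal, illustrative Python sketch of that idea, not the authors' implementation; the function name and the per-pixel feature choice [x, y, intensity, |dI/dx|, |dI/dy|] are assumptions made here for illustration.

    import numpy as np

    def covariance_descriptor(image):
        # Per-pixel features over a grayscale region: pixel coordinates,
        # intensity, and absolute gradients. This particular feature set
        # is an illustrative assumption, not the paper's exact choice.
        h, w = image.shape
        ys, xs = np.mgrid[0:h, 0:w]
        dy, dx = np.gradient(image.astype(float))
        feats = np.stack([xs.ravel().astype(float),
                          ys.ravel().astype(float),
                          image.astype(float).ravel(),
                          np.abs(dx).ravel(),
                          np.abs(dy).ravel()], axis=1)
        # The descriptor is the covariance of the feature columns: a small
        # symmetric positive semidefinite matrix whose size (5 x 5 here)
        # is independent of the region size.
        return np.cov(feats, rowvar=False)

    # Example usage on a stand-in image:
    img = (np.random.rand(64, 64) * 255).astype(float)
    C = covariance_descriptor(img)  # 5 x 5 descriptor matrix

Because such descriptors live on the manifold of symmetric positive (semi)definite matrices, they are usually compared with manifold-aware metrics or kernels (e.g., log-Euclidean) rather than plain Euclidean distance, which is one reason a kernelized sparse coding scheme, as proposed in the paper, is a natural fit.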