Perceptual Compression of Multimodal Tactile Signals with an Attention-Enhanced Autoencoder and Cross-Modal Psychohaptic Loss Function

Detailed bibliography
Published in: IEEE World Haptics Conference (Online), pp. 147-153
Main authors: Wei, Wenxuan; Xu, Xiao; Nockenberg, Lars; Rodriguez-Guevara, Daniel; Steinbach, Eckehard
Medium: Conference paper
Language: English
Publication details: IEEE, 08.07.2025
ISSN: 2835-9534
Description
Summary: This paper presents MPTC-Net, an autoencoder-based perceptual codec for multimodal tactile signals, capable of jointly compressing data across multiple tactile dimensions. Previous studies, including the state-of-the-art vibrotactile codecs standardized in IEEE 1918.1.1 and MPEG-I Haptics Coding, have primarily focused on roughness-related information rather than jointly encoding multiple tactile dimensions. To address this limitation, we developed a Multimodal Psychohaptic Model (MPM) that incorporates the impact of multimodal stimulation on perceptual thresholds. The MPM is integrated into the loss function during training to enhance perceptual performance. Furthermore, an attention module is employed to extract critical information across modalities, and both early fusion and late fusion strategies are explored for improved multimodal integration. Our experimental results show significant improvements with the proposed codec, particularly in vibrotactile perceptual metrics, demonstrating its effectiveness in managing the complexity of multimodal tactile feedback.
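
This record reproduces only the abstract, so the following is a minimal, illustrative PyTorch sketch (not the authors' code) of the kind of pipeline the abstract describes: an early-fusion autoencoder over stacked tactile modalities, a self-attention bottleneck, and a perceptually weighted reconstruction loss standing in for the Multimodal Psychohaptic Model. All class and function names, channel counts, and the placeholder perceptual weights are assumptions made for illustration; the actual MPM and codec architecture are specified only in the paper.

# Minimal sketch under the assumptions stated above.
import torch
import torch.nn as nn

class MultimodalAE(nn.Module):
    def __init__(self, n_modalities=3, latent=32):
        super().__init__()
        # Early fusion: stack tactile modalities (e.g. vibration, force,
        # temperature) as input channels of one shared encoder.
        self.encoder = nn.Sequential(
            nn.Conv1d(n_modalities, 16, 9, stride=2, padding=4), nn.ReLU(),
            nn.Conv1d(16, latent, 9, stride=2, padding=4),
        )
        # Self-attention over the latent sequence, standing in for the
        # abstract's "attention module" that extracts cross-modal content.
        self.attn = nn.MultiheadAttention(latent, num_heads=4, batch_first=True)
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(latent, 16, 9, stride=2, padding=4,
                               output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, n_modalities, 9, stride=2, padding=4,
                               output_padding=1),
        )

    def forward(self, x):                       # x: (batch, modalities, samples)
        z = self.encoder(x)                     # (batch, latent, samples/4)
        z_seq = z.transpose(1, 2)               # (batch, seq, latent)
        z_seq, _ = self.attn(z_seq, z_seq, z_seq)
        return self.decoder(z_seq.transpose(1, 2))

def psychohaptic_loss(x, x_hat, weights):
    # Stand-in for the MPM-based loss: per-modality spectral error,
    # weighted by (hypothetical) perceptual-threshold weights.
    err = (torch.fft.rfft(x, dim=-1) - torch.fft.rfft(x_hat, dim=-1)).abs() ** 2
    return (weights.view(1, -1, 1) * err).mean()

model = MultimodalAE()
x = torch.randn(8, 3, 256)                      # toy batch of multimodal signals
weights = torch.tensor([1.0, 0.5, 0.25])        # placeholder perceptual weights
loss = psychohaptic_loss(x, model(x), weights)
loss.backward()

A late-fusion variant, also explored in the paper, would instead give each modality its own encoder and merge the per-modality latents before the attention stage.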
DOI: 10.1109/WHC64065.2025.11123396