Perceptual Compression of Multimodal Tactile Signals with an Attention-Enhanced Autoencoder and Cross-Modal Psychohaptic Loss Function

Bibliographic Details
Published in: IEEE World Haptics Conference (Online), pp. 147-153
Main Authors: Wei, Wenxuan, Xu, Xiao, Nockenberg, Lars, Rodriguez-Guevara, Daniel, Steinbach, Eckehard
Format: Conference Proceeding
Language: English
Published: IEEE, 08.07.2025
ISSN: 2835-9534
Description
Summary: This paper presents MPTC-Net, an autoencoder-based perceptual codec for multimodal tactile signals, capable of jointly compressing data across multiple tactile dimensions. Previous studies, including the state-of-the-art vibrotactile codecs standardized in IEEE 1918.1.1 and MPEG-I Haptics Coding, have primarily focused on roughness-related information, rather than jointly encoding multiple tactile dimensions. To address this limitation, we developed a Multimodal Psychohaptic Model (MPM) that incorporates the impact of multimodal stimulation on perceptual thresholds. The MPM is integrated into the loss function during training to enhance perceptual performance. Furthermore, an attention module is employed to extract critical information across modalities, and both early fusion and late fusion strategies are explored for improved multimodal integration. Our experimental results show significant improvements with the proposed codec, particularly in vibrotactile perceptual metrics, demonstrating its effectiveness in managing the complexity of multimodal tactile feedback.
DOI: 10.1109/WHC64065.2025.11123396
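
Note: The abstract describes the codec only at a high level, so the following is a minimal PyTorch sketch of the general shape of such a system, not the paper's actual design. It assumes an early-fusion convolutional autoencoder (modalities concatenated channel-wise before encoding), a multi-head self-attention layer over the fused features, and fixed per-modality weights standing in for the MPM's perceptual thresholds; all names (MultimodalTactileAE, psychohaptic_loss), layer sizes, and weight values are illustrative assumptions. A late-fusion variant, which the abstract says was also explored, would instead encode each modality separately and merge the latent codes.

import torch
import torch.nn as nn


class MultimodalTactileAE(nn.Module):
    """Hypothetical early-fusion autoencoder: tactile modalities are stacked
    as input channels, a self-attention layer highlights salient cross-modal
    features, and a linear bottleneck produces the compressed latent code."""

    def __init__(self, n_modalities: int = 3, frame_len: int = 128,
                 latent_dim: int = 16, n_heads: int = 4):
        super().__init__()
        d_model = 64
        self.encoder = nn.Sequential(
            nn.Conv1d(n_modalities, d_model, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=9, stride=2, padding=4),
            nn.ReLU(),
        )
        # Self-attention over time steps to extract critical cross-modal info.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.to_latent = nn.Linear(d_model * (frame_len // 2), latent_dim)
        self.from_latent = nn.Linear(latent_dim, d_model * (frame_len // 2))
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(d_model, d_model, kernel_size=9, stride=2,
                               padding=4, output_padding=1),
            nn.ReLU(),
            nn.Conv1d(d_model, n_modalities, kernel_size=9, padding=4),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_modalities, frame_len)
        h = self.encoder(x)                   # (B, d_model, frame_len/2)
        h = h.transpose(1, 2)                 # (B, T, C) for batch_first attn
        h, _ = self.attn(h, h, h)
        z = self.to_latent(h.flatten(1))      # compressed latent code
        # RHS reads the pre-assignment h, so the reshape matches (B, T, C).
        h = self.from_latent(z).view(h.size(0), -1, h.size(-1)).transpose(1, 2)
        return self.decoder(h)                # reconstructed (B, M, frame_len)


def psychohaptic_loss(x_hat, x, weights):
    """Perceptually weighted reconstruction loss: per-modality MSE scaled by
    placeholder weights standing in for MPM-derived perceptual thresholds."""
    per_modality = ((x_hat - x) ** 2).mean(dim=(0, 2))  # one MSE per modality
    return (weights * per_modality).sum()


if __name__ == "__main__":
    model = MultimodalTactileAE()
    x = torch.randn(8, 3, 128)               # e.g. vibration, force, position
    weights = torch.tensor([1.0, 0.5, 0.5])  # illustrative threshold weights
    loss = psychohaptic_loss(model(x), x, weights)
    loss.backward()
    print(f"loss = {loss.item():.4f}")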