Zero-Shot Semantic Communication With Multimodal Foundation Models

Bibliographic Details
Published in: IEEE Transactions on Vehicular Technology, pp. 1-6
Main Authors: Hu, Jiangjing, Wu, Haotian, Zhang, Wenjing, Wang, Fengyu, Xu, Wenjun, Gao, Hui, Gunduz, Deniz
Format: Journal Article
Language: English
Published: IEEE 2025
ISSN: 0018-9545, 1939-9359
Description
Summary: Most existing semantic communication (SemCom) systems use deep joint source-channel coding (DeepJSCC) to encode task-specific semantics in a goal-oriented manner. However, their reliance on predefined tasks and datasets significantly limits their flexibility and generalizability in practical deployments. Multimodal foundation models provide a promising solution by generating universal semantic tokens. Inspired by this, in this paper, we propose SemCLIP, a zero-shot SemCom framework leveraging the contrastive language-image pre-training (CLIP) model. CLIP-generated image tokens are transmitted in SemCLIP under low bandwidth and challenging channel conditions, facilitating diverse zero-shot applications. Specifically, we propose a DeepJSCC scheme for efficient CLIP token encoding. To mitigate potential degradation caused by compression and channel noise, a multimodal transmission-aware prompt learning (TAPL) mechanism is designed at the receiver, which adapts prompts based on transmission quality, enhancing system robustness and channel adaptability. Simulation results demonstrate that SemCLIP outperforms the baselines, achieving a 41% improvement in zero-shot performance at low signal-to-noise ratios. Meanwhile, SemCLIP reduces bandwidth usage by more than 50-fold compared to alternative image transmission methods, demonstrating the potential of foundation models towards a generalized, task-agnostic SemCom solution.
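
The summary only sketches the pipeline at a high level. The snippet below is a minimal illustrative sketch of the general idea: a DeepJSCC-style autoencoder transmits a CLIP image token over an AWGN channel, and the receiver performs zero-shot classification by cosine similarity against text-prompt embeddings. The JSCCAutoencoder class, its dimensions, and the random tensors standing in for real CLIP embeddings are assumptions for illustration, not the authors' implementation, and the transmission-aware prompt learning (TAPL) component is not modeled here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class JSCCAutoencoder(nn.Module):
    """Toy DeepJSCC-style codec for a d-dimensional CLIP image token (hypothetical)."""
    def __init__(self, dim: int = 512, channel_uses: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, channel_uses))
        self.decoder = nn.Sequential(nn.Linear(channel_uses, 256), nn.ReLU(), nn.Linear(256, dim))

    def forward(self, x: torch.Tensor, snr_db: float = 0.0) -> torch.Tensor:
        z = self.encoder(x)
        # Normalize average transmit power to 1, then add AWGN at the given SNR.
        z = z / z.pow(2).mean(dim=-1, keepdim=True).sqrt()
        noise_std = 10 ** (-snr_db / 20)
        y = z + noise_std * torch.randn_like(z)
        return self.decoder(y)

# Placeholder embeddings; in practice these would come from a pretrained CLIP
# image encoder (transmitter) and text encoder applied to class prompts (receiver).
image_token = F.normalize(torch.randn(1, 512), dim=-1)
text_tokens = F.normalize(torch.randn(5, 512), dim=-1)

codec = JSCCAutoencoder()
received = F.normalize(codec(image_token, snr_db=0.0), dim=-1)

# Zero-shot classification at the receiver: similarity of the recovered image
# token to each prompt embedding, turned into class probabilities.
probs = (100.0 * received @ text_tokens.T).softmax(dim=-1)
print(probs)

In the paper's setting, the prompt embeddings on the receiver side would additionally be adapted to the observed transmission quality (the TAPL mechanism), which is what the reported robustness gains at low SNR refer to.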
DOI: 10.1109/TVT.2025.3632893