Adaptive Fusion Learning for Compositional Zero-Shot Recognition

Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Multimedia, Vol. 27, pp. 1193-1204
Main Authors: Min, Lingtong; Fan, Ziman; Wang, Shunzhou; Dou, Feiyang; Li, Xin; Wang, Binglu
Format: Journal Article
Language: English
Published: IEEE, 01.01.2025
ISSN: 1520-9210, 1941-0077
Online Access: Full Text
Description
Abstract: Compositional Zero-Shot Learning (CZSL) aims to learn visual concepts (i.e., attributes and objects) from seen compositions and combine them to predict unseen compositions. Existing CZSL methods typically encode image features with traditional visual encoders (i.e., CNNs and Transformers) or with image encoders from Visual-Language Models (VLMs). However, traditional visual encoders lack multi-modal textual information, and the image encoders of VLMs depend heavily on their pre-training data, making them less effective when used independently for predicting unseen compositions. To overcome this limitation, we propose a novel approach that jointly models traditional visual encoders and VLM visual encoders to enhance the prediction of uncommon and unseen compositions. Specifically, we design an adaptive fusion module that automatically adjusts the weighting parameters over the similarity scores of the traditional and VLM branches during training; these weighting parameters are then inherited at inference. Given the importance of disentangling attributes and objects, we design a Multi-Attribute Object Module that incorporates multiple attribute-object pairs as prior knowledge during training, leveraging this rich prior knowledge to facilitate the disentanglement of attributes and objects. Building upon this, we adopt the text encoder from VLMs to construct the Adaptive Fusion Network. We conduct extensive experiments on the Clothing16K, UT-Zappos50K, and C-GQA datasets, achieving excellent performance on the Clothing16K and UT-Zappos50K datasets.
DOI: 10.1109/TMM.2024.3521852
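
To make the fusion mechanism in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of the adaptive-fusion idea: two branches (a traditional visual encoder and a VLM image encoder) each produce similarity scores over the composition classes, and a small set of learnable weights blends them during training; at inference the learned weights are simply reused. The class and parameter names (AdaptiveFusion, fusion_logits, the branch score shapes) are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of adaptive fusion of two branches' similarity scores.
import torch
import torch.nn as nn


class AdaptiveFusion(nn.Module):
    """Blends similarity scores from two branches with learnable weights."""

    def __init__(self):
        super().__init__()
        # One learnable logit per branch; softmax keeps the fusion weights
        # positive and summing to one. The values learned during training are
        # reused unchanged at inference (no further updates in eval mode).
        self.fusion_logits = nn.Parameter(torch.zeros(2))

    def forward(self, sim_traditional: torch.Tensor, sim_vlm: torch.Tensor) -> torch.Tensor:
        # sim_*: [batch, num_compositions] similarity scores from each branch.
        w = torch.softmax(self.fusion_logits, dim=0)
        return w[0] * sim_traditional + w[1] * sim_vlm


if __name__ == "__main__":
    fusion = AdaptiveFusion()
    sim_cnn = torch.randn(4, 100)   # e.g., scores from a CNN/Transformer branch
    sim_clip = torch.randn(4, 100)  # e.g., scores from a VLM (CLIP-style) branch
    fused = fusion(sim_cnn, sim_clip)
    print(fused.shape)  # torch.Size([4, 100])
```

In this sketch the fused scores can be trained end-to-end with a standard cross-entropy loss over composition labels, so the two fusion weights adapt automatically to whichever branch is more reliable on the training data.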