Calibrating Multi-modal Representations: A Pursuit of Group Robustness without Annotations
Fine-tuning pre-trained vision-language models, like CLIP, has yielded success on diverse downstream tasks. However, several pain points persist for this paradigm: (i) directly tuning entire pre-trained models becomes both time-intensive and computationally costly. Additionally, these tuned models t...
Saved in:
| Published in: | Proceedings (IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Online) Vol. 2024; pp. 26140 - 26150 |
|---|---|
| Main authors: | , , , , , |
| Format: | Conference proceedings; Journal article |
| Language: | English |
| Published: | United States: IEEE, 01.06.2024 |
| Subjects: | |
| ISSN: | 1063-6919 |
| Online access: | Full text |