PreCurious: How Innocent Pre-Trained Language Models Turn into Privacy Traps

The pre-training and fine-tuning paradigm has demonstrated its effectiveness and has become the standard approach for tailoring language models to various tasks. Currently, community-based platforms offer easy access to various pre-trained models, as anyone can publish without strict validation proc...


Bibliographic Details
Published in: Proceedings of the ... ACM Conference on Computer and Communications Security, Vol. 2024, p. 3511
Main Authors: Liu, Ruixuan; Wang, Tianhao; Cao, Yang; Xiong, Li
Format: Journal Article
Language: English
Published: United States, 01.10.2024
ISSN: 1543-7221