PreCurious: How Innocent Pre-Trained Language Models Turn into Privacy Traps
The pre-training and fine-tuning paradigm has demonstrated its effectiveness and has become the standard approach for tailoring language models to various tasks. Currently, community-based platforms offer easy access to various pre-trained models, as anyone can publish without strict validation proc...
| Published in: | Proceedings of the ... ACM Conference on Computer and Communications Security, Vol. 2024, p. 3511 |
|---|---|
| Main authors: | , , , |
| Format: | Journal Article |
| Language: | English |
| Published: | United States, 01.10.2024 |
| Subjects: | |
| ISSN: | 1543-7221 |
| Online access: | Further details |