PreCurious: How Innocent Pre-Trained Language Models Turn into Privacy Traps

The pre-training and fine-tuning paradigm has proven effective and has become the standard approach for tailoring language models to downstream tasks. Community-based platforms now offer easy access to a wide range of pre-trained models, as anyone can publish without strict validation proc...


Bibliographic Details
Published in: Proceedings of the ... ACM Conference on Computer and Communications Security, Vol. 2024, p. 3511
Main Authors: Liu, Ruixuan; Wang, Tianhao; Cao, Yang; Xiong, Li
Format: Journal Article
Language: English
Published: United States, 01.10.2024
ISSN: 1543-7221