Citation in APA style (7th ed.)

Liu, R., Wang, T., Cao, Y., & Xiong, L. (2024). PreCurious: How Innocent Pre-Trained Language Models Turn into Privacy Traps. Proceedings of the ... ACM Conference on Computer and Communications Security, 2024, 3511. https://doi.org/10.1145/3658644.3690279

Citation in Chicago style (17th ed.)

Liu, Ruixuan, Tianhao Wang, Yang Cao, and Li Xiong. "PreCurious: How Innocent Pre-Trained Language Models Turn into Privacy Traps." Proceedings of the ... ACM Conference on Computer and Communications Security 2024 (2024): 3511. https://doi.org/10.1145/3658644.3690279.

Citation in MLA style (9th ed.)

Liu, Ruixuan, et al. "PreCurious: How Innocent Pre-Trained Language Models Turn into Privacy Traps." Proceedings of the ... ACM Conference on Computer and Communications Security, vol. 2024, 2024, p. 3511, https://doi.org/10.1145/3658644.3690279.

Note: These citations are generated automatically and may not fully conform to the respective citation rules.