Yang, Y., Jin, Q., Huang, F., & Lu, Z. (2025). Adversarial prompt and fine-tuning attacks threaten medical large language models. Nature Communications, 16(1), 9011-10. https://doi.org/10.1038/s41467-025-64062-1
Chicago citation style (17th ed.): Yang, Yifan, Qiao Jin, Furong Huang, and Zhiyong Lu. "Adversarial Prompt and Fine-tuning Attacks Threaten Medical Large Language Models." Nature Communications 16, no. 1 (2025): 9011-10. https://doi.org/10.1038/s41467-025-64062-1.
MLA citation style (9th ed.): Yang, Yifan, et al. "Adversarial Prompt and Fine-tuning Attacks Threaten Medical Large Language Models." Nature Communications, vol. 16, no. 1, 2025, pp. 9011-10, https://doi.org/10.1038/s41467-025-64062-1.