Adversarial prompt and fine-tuning attacks threaten medical large language models

Detailed bibliography
Published in: Nature Communications, Vol. 16, No. 1, Article 9011 (10 pages)
Main authors: Yang, Yifan; Jin, Qiao; Huang, Furong; Lu, Zhiyong
Format: Journal Article
Language: English
Published: London: Nature Publishing Group UK, 9 October 2025
ISSN: 2041-1723
Description
Summary: The integration of Large Language Models (LLMs) into healthcare applications offers promising advancements in medical diagnostics, treatment recommendations, and patient care. However, the susceptibility of LLMs to adversarial attacks poses a significant threat, potentially leading to harmful outcomes in delicate medical contexts. This study investigates the vulnerability of LLMs to two types of adversarial attacks, prompt injections with malicious instructions and fine-tuning with poisoned samples, across three medical tasks: disease prevention, diagnosis, and treatment. Using real-world patient data, we demonstrate that both open-source and proprietary LLMs are vulnerable to malicious manipulation across multiple tasks. We find that while integrating poisoned data does not markedly degrade overall model performance on medical benchmarks, it can lead to noticeable shifts in fine-tuned model weights, suggesting a potential pathway for detecting and countering model attacks. This research highlights the urgent need for robust security measures and defensive mechanisms to safeguard LLMs in medical applications and to ensure their safe and effective deployment in healthcare settings.

Large language models hold significant potential in healthcare settings. This study exposes their vulnerability in medical applications and demonstrates the inadequacy of existing safeguards, highlighting the need for future studies to develop reliable methods for detecting and mitigating these risks.
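The detection pathway mentioned in the summary, watching for shifts in fine-tuned model weights, can be illustrated with a short sketch. The snippet below is not the authors' method; it simply compares a fine-tuned checkpoint against its base model and reports the layers with the largest relative parameter change, using Hugging Face Transformers and PyTorch. The model names, the relative-L2 metric, and the top-10 cutoff are illustrative assumptions.

```python
# Minimal sketch: measure per-layer weight shift between a base model and a
# fine-tuned checkpoint. A few sharply outlying layers could be one signal
# worth inspecting for poisoned fine-tuning.
# The checkpoint names below are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM

BASE = "meta-llama/Llama-2-7b-hf"        # assumed base checkpoint
TUNED = "path/to/finetuned-checkpoint"   # assumed fine-tuned checkpoint

base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.float32)
tuned = AutoModelForCausalLM.from_pretrained(TUNED, torch_dtype=torch.float32)

base_params = dict(base.named_parameters())
shifts = {}
for name, p_tuned in tuned.named_parameters():
    p_base = base_params.get(name)
    if p_base is None or p_base.shape != p_tuned.shape:
        continue  # skip parameters that don't line up (e.g., resized embeddings)
    diff = (p_tuned.detach() - p_base.detach()).norm()
    shifts[name] = (diff / (p_base.detach().norm() + 1e-12)).item()

# Report the most-shifted parameters; a flat profile versus a few outliers
# tells very different stories about what fine-tuning changed.
for name, rel in sorted(shifts.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(f"{name}: relative L2 shift = {rel:.4f}")
```

Normalizing each layer's shift by the base parameter norm keeps layers of very different sizes comparable; concentrated outliers after an otherwise small-footprint fine-tune are the kind of signal the summary suggests could be monitored.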
DOI: 10.1038/s41467-025-64062-1