Adversarial prompt and fine-tuning attacks threaten medical large language models

The integration of Large Language Models (LLMs) into healthcare applications offers promising advancements in medical diagnostics, treatment recommendations, and patient care. However, the susceptibility of LLMs to adversarial attacks poses a significant threat, potentially leading to harmful outcom...

Bibliographic Details
Published in: Nature Communications, Vol. 16, no. 1, Article 9011
Main Authors: Yang, Yifan; Jin, Qiao; Huang, Furong; Lu, Zhiyong
Format: Journal Article
Language: English
Published: London: Nature Publishing Group UK, 09.10.2025
ISSN: 2041-1723