Adversarial prompt and fine-tuning attacks threaten medical large language models

The integration of Large Language Models (LLMs) into healthcare applications offers promising advancements in medical diagnostics, treatment recommendations, and patient care. However, the susceptibility of LLMs to adversarial attacks poses a significant threat, potentially leading to harmful outcom...

Full description

Saved in:
Bibliographic Details
Published in:Nature communications Vol. 16; no. 1; pp. 9011 - 10
Main Authors: Yang, Yifan, Jin, Qiao, Huang, Furong, Lu, Zhiyong
Format: Journal Article
Language:English
Published: London Nature Publishing Group UK 09.10.2025
Nature Publishing Group
Nature Portfolio
Subjects:
ISSN:2041-1723, 2041-1723
Online Access:Get full text
Tags: Add Tag
No Tags, Be the first to tag this record!
Abstract The integration of Large Language Models (LLMs) into healthcare applications offers promising advancements in medical diagnostics, treatment recommendations, and patient care. However, the susceptibility of LLMs to adversarial attacks poses a significant threat, potentially leading to harmful outcomes in delicate medical contexts. This study investigates the vulnerability of LLMs to two types of adversarial attacks–prompt injections with malicious instructions and fine-tuning with poisoned samples–across three medical tasks: disease prevention, diagnosis, and treatment. Utilizing real-world patient data, we demonstrate that both open-source and proprietary LLMs are vulnerable to malicious manipulation across multiple tasks. We discover that while integrating poisoned data does not markedly degrade overall model performance on medical benchmarks, it can lead to noticeable shifts in fine-tuned model weights, suggesting a potential pathway for detecting and countering model attacks. This research highlights the urgent need for robust security measures and the development of defensive mechanisms to safeguard LLMs in medical applications, to ensure their safe and effective deployment in healthcare settings. Large language models hold significant potential in healthcare settings. This study exposes their vulnerability in medical applications and demonstrates the inadequacy of existing safeguards, highlighting the need for future studies to develop reliable methods for detecting and mitigating these risks.
AbstractList The integration of Large Language Models (LLMs) into healthcare applications offers promising advancements in medical diagnostics, treatment recommendations, and patient care. However, the susceptibility of LLMs to adversarial attacks poses a significant threat, potentially leading to harmful outcomes in delicate medical contexts. This study investigates the vulnerability of LLMs to two types of adversarial attacks-prompt injections with malicious instructions and fine-tuning with poisoned samples-across three medical tasks: disease prevention, diagnosis, and treatment. Utilizing real-world patient data, we demonstrate that both open-source and proprietary LLMs are vulnerable to malicious manipulation across multiple tasks. We discover that while integrating poisoned data does not markedly degrade overall model performance on medical benchmarks, it can lead to noticeable shifts in fine-tuned model weights, suggesting a potential pathway for detecting and countering model attacks. This research highlights the urgent need for robust security measures and the development of defensive mechanisms to safeguard LLMs in medical applications, to ensure their safe and effective deployment in healthcare settings.The integration of Large Language Models (LLMs) into healthcare applications offers promising advancements in medical diagnostics, treatment recommendations, and patient care. However, the susceptibility of LLMs to adversarial attacks poses a significant threat, potentially leading to harmful outcomes in delicate medical contexts. This study investigates the vulnerability of LLMs to two types of adversarial attacks-prompt injections with malicious instructions and fine-tuning with poisoned samples-across three medical tasks: disease prevention, diagnosis, and treatment. Utilizing real-world patient data, we demonstrate that both open-source and proprietary LLMs are vulnerable to malicious manipulation across multiple tasks. We discover that while integrating poisoned data does not markedly degrade overall model performance on medical benchmarks, it can lead to noticeable shifts in fine-tuned model weights, suggesting a potential pathway for detecting and countering model attacks. This research highlights the urgent need for robust security measures and the development of defensive mechanisms to safeguard LLMs in medical applications, to ensure their safe and effective deployment in healthcare settings.
The integration of Large Language Models (LLMs) into healthcare applications offers promising advancements in medical diagnostics, treatment recommendations, and patient care. However, the susceptibility of LLMs to adversarial attacks poses a significant threat, potentially leading to harmful outcomes in delicate medical contexts. This study investigates the vulnerability of LLMs to two types of adversarial attacks-prompt injections with malicious instructions and fine-tuning with poisoned samples-across three medical tasks: disease prevention, diagnosis, and treatment. Utilizing real-world patient data, we demonstrate that both open-source and proprietary LLMs are vulnerable to malicious manipulation across multiple tasks. We discover that while integrating poisoned data does not markedly degrade overall model performance on medical benchmarks, it can lead to noticeable shifts in fine-tuned model weights, suggesting a potential pathway for detecting and countering model attacks. This research highlights the urgent need for robust security measures and the development of defensive mechanisms to safeguard LLMs in medical applications, to ensure their safe and effective deployment in healthcare settings.
The integration of Large Language Models (LLMs) into healthcare applications offers promising advancements in medical diagnostics, treatment recommendations, and patient care. However, the susceptibility of LLMs to adversarial attacks poses a significant threat, potentially leading to harmful outcomes in delicate medical contexts. This study investigates the vulnerability of LLMs to two types of adversarial attacks–prompt injections with malicious instructions and fine-tuning with poisoned samples–across three medical tasks: disease prevention, diagnosis, and treatment. Utilizing real-world patient data, we demonstrate that both open-source and proprietary LLMs are vulnerable to malicious manipulation across multiple tasks. We discover that while integrating poisoned data does not markedly degrade overall model performance on medical benchmarks, it can lead to noticeable shifts in fine-tuned model weights, suggesting a potential pathway for detecting and countering model attacks. This research highlights the urgent need for robust security measures and the development of defensive mechanisms to safeguard LLMs in medical applications, to ensure their safe and effective deployment in healthcare settings. Large language models hold significant potential in healthcare settings. This study exposes their vulnerability in medical applications and demonstrates the inadequacy of existing safeguards, highlighting the need for future studies to develop reliable methods for detecting and mitigating these risks.
Abstract The integration of Large Language Models (LLMs) into healthcare applications offers promising advancements in medical diagnostics, treatment recommendations, and patient care. However, the susceptibility of LLMs to adversarial attacks poses a significant threat, potentially leading to harmful outcomes in delicate medical contexts. This study investigates the vulnerability of LLMs to two types of adversarial attacks–prompt injections with malicious instructions and fine-tuning with poisoned samples–across three medical tasks: disease prevention, diagnosis, and treatment. Utilizing real-world patient data, we demonstrate that both open-source and proprietary LLMs are vulnerable to malicious manipulation across multiple tasks. We discover that while integrating poisoned data does not markedly degrade overall model performance on medical benchmarks, it can lead to noticeable shifts in fine-tuned model weights, suggesting a potential pathway for detecting and countering model attacks. This research highlights the urgent need for robust security measures and the development of defensive mechanisms to safeguard LLMs in medical applications, to ensure their safe and effective deployment in healthcare settings.
The integration of Large Language Models (LLMs) into healthcare applications offers promising advancements in medical diagnostics, treatment recommendations, and patient care. However, the susceptibility of LLMs to adversarial attacks poses a significant threat, potentially leading to harmful outcomes in delicate medical contexts. This study investigates the vulnerability of LLMs to two types of adversarial attacks–prompt injections with malicious instructions and fine-tuning with poisoned samples–across three medical tasks: disease prevention, diagnosis, and treatment. Utilizing real-world patient data, we demonstrate that both open-source and proprietary LLMs are vulnerable to malicious manipulation across multiple tasks. We discover that while integrating poisoned data does not markedly degrade overall model performance on medical benchmarks, it can lead to noticeable shifts in fine-tuned model weights, suggesting a potential pathway for detecting and countering model attacks. This research highlights the urgent need for robust security measures and the development of defensive mechanisms to safeguard LLMs in medical applications, to ensure their safe and effective deployment in healthcare settings.Large language models hold significant potential in healthcare settings. This study exposes their vulnerability in medical applications and demonstrates the inadequacy of existing safeguards, highlighting the need for future studies to develop reliable methods for detecting and mitigating these risks.
ArticleNumber 9011
Author Huang, Furong
Lu, Zhiyong
Yang, Yifan
Jin, Qiao
Author_xml – sequence: 1
  givenname: Yifan
  orcidid: 0000-0003-4414-9176
  surname: Yang
  fullname: Yang, Yifan
  organization: National Library of Medicine (NLM), National Institutes of Health (NIH), University of Maryland at College Park, Department of Computer Science
– sequence: 2
  givenname: Qiao
  orcidid: 0000-0002-1268-7239
  surname: Jin
  fullname: Jin, Qiao
  organization: National Library of Medicine (NLM), National Institutes of Health (NIH)
– sequence: 3
  givenname: Furong
  surname: Huang
  fullname: Huang, Furong
  organization: University of Maryland at College Park, Department of Computer Science
– sequence: 4
  givenname: Zhiyong
  orcidid: 0000-0001-9998-916X
  surname: Lu
  fullname: Lu, Zhiyong
  email: zhiyong.lu@nih.gov
  organization: National Library of Medicine (NLM), National Institutes of Health (NIH)
BackLink https://www.ncbi.nlm.nih.gov/pubmed/41068092$$D View this record in MEDLINE/PubMed
BookMark eNp9kctu1TAQhi1URMuhL8ACRWLDJuDLOJdlVXGpVAkhwdqa2OOQQ-Ic7KRS3x6fk1IQC7ywR9Y3_1z-5-wszIEYeyn4W8FV8y6BgKouudRlBbySpXjCLiQHUYpaqrO_4nN2mdKe56Na0QA8Y-cgeNXwVl6wL1fujmLCOOBYHOI8HZYCgyv8EKhc1jCEvsBlQfsjFcv3SLhQKCZyg838iLGnfId-xRxMs6MxvWBPPY6JLh_eHfv24f3X60_l7eePN9dXt6UF1SwltpzqqhOdACfRK4tOi9bWjURsQIF2mrSsoe281K3ytRXE66ZTHrh3jqsdu9l03Yx7c4jDhPHezDiY08cce4NxGexIRlkpK-0VcSIACx1acCAa3RIhiCprvdm08gZ-rpQWMw3J0phHo3lNRuUWGl21uZEde_0Pup_XGPKkJwqynDxSrx6otcvbemzv9-IzIDfAxjmlSP4REdwcDTabwSYbbE4GG5GT1JaUMhx6in9q_yfrF2Pnpl0
Cites_doi 10.48550/arXiv.2307.15043
10.48550/arXiv.2411.11525
10.1136/jme-2023-109549
10.1093/bioinformatics/btae075
10.1038/sdata.2016.35
10.48550/arXiv.2404.15777
10.48550/arXiv.2310.15140
10.1109/ICCV51070.2023.00412
10.4174/astr.2023.104.5.269
10.18653/v1/2024.naacl-long.171
10.3389/frai.2023.1169595
10.1093/bib/bbad493
10.48550/arXiv.2312.10997
10.1109/ICCVW54120.2021.00008
10.48550/arXiv.2307.09288
10.1136/bjo-2023-324438
10.2196/53724
10.48550/arXiv.2303.08774
10.48550/arXiv.1909.06146
10.48550/arXiv.2411.18940
10.1038/s41597-023-02814-8
10.48550/arXiv.2306.05499
10.1038/s41467-024-53081-z
10.3390/app11146421
10.18653/v1/W19-5003
10.1007/978-3-031-26553-2_22
ContentType Journal Article
Copyright This is a U.S. Government work and not under copyright protection in the US; foreign copyright protection may apply 2025
2025. This is a U.S. Government work and not under copyright protection in the US; foreign copyright protection may apply.
This is a U.S. Government work and not under copyright protection in the US; foreign copyright protection may apply 2025. This work is published under http://creativecommons.org/licenses/by/4.0/ (the "License"). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Copyright_xml – notice: This is a U.S. Government work and not under copyright protection in the US; foreign copyright protection may apply 2025
– notice: 2025. This is a U.S. Government work and not under copyright protection in the US; foreign copyright protection may apply.
– notice: This is a U.S. Government work and not under copyright protection in the US; foreign copyright protection may apply 2025. This work is published under http://creativecommons.org/licenses/by/4.0/ (the "License"). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
DBID C6C
AAYXX
CITATION
CGR
CUY
CVF
ECM
EIF
NPM
3V.
7QL
7QP
7QR
7SN
7SS
7ST
7T5
7T7
7TM
7TO
7X7
7XB
88E
8AO
8FD
8FE
8FG
8FH
8FI
8FJ
8FK
ABUWG
AEUYN
AFKRA
ARAPS
AZQEC
BBNVY
BENPR
BGLVJ
BHPHI
C1K
CCPQU
COVID
DWQXO
FR3
FYUFA
GHDGH
GNUQQ
H94
HCIFZ
K9.
LK8
M0S
M1P
M7P
P5Z
P62
P64
PHGZM
PHGZT
PIMPY
PJZUB
PKEHL
PPXIY
PQEST
PQGLB
PQQKQ
PQUKI
PRINS
RC3
SOI
7X8
DOA
DOI 10.1038/s41467-025-64062-1
DatabaseName Springer Nature OA Free Journals (WRLC)
CrossRef
Medline
MEDLINE
MEDLINE (Ovid)
MEDLINE
MEDLINE
PubMed
ProQuest Central (Corporate)
Bacteriology Abstracts (Microbiology B)
Calcium & Calcified Tissue Abstracts
Chemoreception Abstracts
Ecology Abstracts
Entomology Abstracts (Full archive)
Environment Abstracts
Immunology Abstracts
Industrial and Applied Microbiology Abstracts (Microbiology A)
Nucleic Acids Abstracts
Oncogenes and Growth Factors Abstracts
Health & Medical Collection
ProQuest Central (purchase pre-March 2016)
Medical Database (Alumni Edition)
ProQuest Pharma Collection
Technology Research Database
ProQuest SciTech Collection
ProQuest Technology Collection
ProQuest Natural Science Collection
ProQuest Hospital Collection
Hospital Premium Collection (Alumni Edition)
ProQuest Central (Alumni) (purchase pre-March 2016)
ProQuest Central (Alumni)
ProQuest One Sustainability
ProQuest Central UK/Ireland
Advanced Technologies & Computer Science Collection
ProQuest Central Essentials
Biological Science Collection
ProQuest Central
Technology Collection
Natural Science Collection
Environmental Sciences and Pollution Management
ProQuest One
Coronavirus Research Database
ProQuest Central Korea
Engineering Research Database
Proquest Health Research Premium Collection
Health Research Premium Collection (Alumni)
ProQuest Central Student
AIDS and Cancer Research Abstracts
SciTech Premium Collection
ProQuest Health & Medical Complete (Alumni)
Biological Sciences
ProQuest Health & Medical Collection
Medical Database
Biological Science Database
Advanced Technologies & Aerospace Database
ProQuest Advanced Technologies & Aerospace Collection
Biotechnology and BioEngineering Abstracts
Proquest Central Premium
ProQuest One Academic (New)
Publicly Available Content Database
ProQuest Health & Medical Research Collection
ProQuest One Academic Middle East (New)
ProQuest One Health & Nursing
ProQuest One Academic Eastern Edition (DO NOT USE)
ProQuest One Applied & Life Sciences
ProQuest One Academic (retired)
ProQuest One Academic UKI Edition
ProQuest Central China
Genetics Abstracts
Environment Abstracts
MEDLINE - Academic
DOAJ Directory of Open Access Journals
DatabaseTitle CrossRef
MEDLINE
Medline Complete
MEDLINE with Full Text
PubMed
MEDLINE (Ovid)
Publicly Available Content Database
ProQuest Central Student
Oncogenes and Growth Factors Abstracts
ProQuest Advanced Technologies & Aerospace Collection
ProQuest Central Essentials
Nucleic Acids Abstracts
SciTech Premium Collection
ProQuest Central China
Environmental Sciences and Pollution Management
ProQuest One Applied & Life Sciences
ProQuest One Sustainability
Health Research Premium Collection
Natural Science Collection
Health & Medical Research Collection
Biological Science Collection
Chemoreception Abstracts
Industrial and Applied Microbiology Abstracts (Microbiology A)
ProQuest Central (New)
ProQuest Medical Library (Alumni)
Advanced Technologies & Aerospace Collection
ProQuest Biological Science Collection
ProQuest One Academic Eastern Edition
Coronavirus Research Database
ProQuest Hospital Collection
ProQuest Technology Collection
Health Research Premium Collection (Alumni)
Biological Science Database
Ecology Abstracts
ProQuest Hospital Collection (Alumni)
Biotechnology and BioEngineering Abstracts
Entomology Abstracts
ProQuest Health & Medical Complete
ProQuest One Academic UKI Edition
Engineering Research Database
ProQuest One Academic
Calcium & Calcified Tissue Abstracts
ProQuest One Academic (New)
Technology Collection
Technology Research Database
ProQuest One Academic Middle East (New)
ProQuest Health & Medical Complete (Alumni)
ProQuest Central (Alumni Edition)
ProQuest One Community College
ProQuest One Health & Nursing
ProQuest Natural Science Collection
ProQuest Pharma Collection
ProQuest Central
ProQuest Health & Medical Research Collection
Genetics Abstracts
Health and Medicine Complete (Alumni Edition)
ProQuest Central Korea
Bacteriology Abstracts (Microbiology B)
AIDS and Cancer Research Abstracts
ProQuest SciTech Collection
Advanced Technologies & Aerospace Database
ProQuest Medical Library
Immunology Abstracts
Environment Abstracts
ProQuest Central (Alumni)
MEDLINE - Academic
DatabaseTitleList MEDLINE - Academic
MEDLINE


Publicly Available Content Database
CrossRef
Database_xml – sequence: 1
  dbid: DOA
  name: DOAJ Directory of Open Access Journals
  url: https://www.doaj.org/
  sourceTypes: Open Website
– sequence: 2
  dbid: NPM
  name: PubMed
  url: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed
  sourceTypes: Index Database
– sequence: 3
  dbid: PIMPY
  name: Publicly Available Content Database
  url: http://search.proquest.com/publiccontent
  sourceTypes: Aggregation Database
DeliveryMethod fulltext_linktorsrc
Discipline Biology
EISSN 2041-1723
EndPage 10
ExternalDocumentID oai_doaj_org_article_3c2265f3e0ee44c4bac4d41859eea416
41068092
10_1038_s41467_025_64062_1
Genre Journal Article
GrantInformation_xml – fundername: U.S. Department of Health & Human Services | NIH | U.S. National Library of Medicine (NLM)
  funderid: https://doi.org/10.13039/100000092
GroupedDBID ---
0R~
39C
53G
5VS
70F
7X7
88E
8AO
8FE
8FG
8FH
8FI
8FJ
AAHBH
AAJSJ
AASML
ABUWG
ACGFO
ACGFS
ACIWK
ACMJI
ACPRK
ADBBV
ADFRT
ADMLS
ADRAZ
AENEX
AEUYN
AFKRA
AFRAH
AHMBA
ALMA_UNASSIGNED_HOLDINGS
AMTXH
AOIJS
ARAPS
ASPBG
AVWKF
AZFZN
BBNVY
BCNDV
BENPR
BGLVJ
BHPHI
BPHCQ
BVXVI
C6C
CCPQU
DIK
EBLON
EBS
EE.
EMOBN
F5P
FEDTE
FYUFA
GROUPED_DOAJ
HCIFZ
HMCUK
HVGLF
HYE
HZ~
KQ8
LGEZI
LK8
LOTEE
M1P
M7P
M~E
NADUK
NAO
NXXTH
O9-
OK1
P2P
P62
PHGZM
PHGZT
PIMPY
PJZUB
PPXIY
PQGLB
PQQKQ
PROAC
PSQYO
RNS
RNT
RNTTT
RPM
SNYQT
SV3
TSG
UKHRP
AAYXX
AFFHD
CITATION
CGR
CUY
CVF
ECM
EIF
NPM
3V.
7QL
7QP
7QR
7SN
7SS
7ST
7T5
7T7
7TM
7TO
7XB
8FD
8FK
AZQEC
C1K
COVID
DWQXO
FR3
GNUQQ
H94
K9.
M48
P64
PKEHL
PQEST
PQUKI
PRINS
RC3
SOI
7X8
ID FETCH-LOGICAL-c438t-a90e76b1b14d2af3cad519c782aa84345d5e52749bf2593f7c1e078b3f40fdd03
IEDL.DBID P5Z
ISICitedReferencesCount 0
ISICitedReferencesURI http://www.webofscience.com/api/gateway?GWVersion=2&SrcApp=Summon&SrcAuth=ProQuest&DestLinkType=CitingArticles&DestApp=WOS_CPL&KeyUT=001591591400008&url=https%3A%2F%2Fcvtisr.summon.serialssolutions.com%2F%23%21%2Fsearch%3Fho%3Df%26include.ft.matches%3Dt%26l%3Dnull%26q%3D
ISSN 2041-1723
IngestDate Mon Oct 13 19:21:11 EDT 2025
Sat Oct 11 06:37:21 EDT 2025
Sat Oct 11 05:36:48 EDT 2025
Mon Oct 13 01:40:50 EDT 2025
Sat Nov 29 07:20:37 EST 2025
Fri Oct 10 01:14:31 EDT 2025
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 1
Language English
License 2025. This is a U.S. Government work and not under copyright protection in the US; foreign copyright protection may apply.
LinkModel DirectLink
MergedId FETCHMERGED-LOGICAL-c438t-a90e76b1b14d2af3cad519c782aa84345d5e52749bf2593f7c1e078b3f40fdd03
Notes ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
content type line 14
content type line 23
ORCID 0000-0001-9998-916X
0000-0002-1268-7239
0000-0003-4414-9176
OpenAccessLink https://www.proquest.com/docview/3259441629?pq-origsite=%requestingapplication%
PMID 41068092
PQID 3259441629
PQPubID 546298
PageCount 10
ParticipantIDs doaj_primary_oai_doaj_org_article_3c2265f3e0ee44c4bac4d41859eea416
proquest_miscellaneous_3259856959
proquest_journals_3259441629
pubmed_primary_41068092
crossref_primary_10_1038_s41467_025_64062_1
springer_journals_10_1038_s41467_025_64062_1
PublicationCentury 2000
PublicationDate 2025-10-09
PublicationDateYYYYMMDD 2025-10-09
PublicationDate_xml – month: 10
  year: 2025
  text: 2025-10-09
  day: 09
PublicationDecade 2020
PublicationPlace London
PublicationPlace_xml – name: London
– name: England
PublicationTitle Nature communications
PublicationTitleAbbrev Nat Commun
PublicationTitleAlternate Nat Commun
PublicationYear 2025
Publisher Nature Publishing Group UK
Nature Publishing Group
Nature Portfolio
Publisher_xml – name: Nature Publishing Group UK
– name: Nature Publishing Group
– name: Nature Portfolio
References M Balas (64062_CR9) 2024; 50
64062_CR20
AEW Johnson (64062_CR18) 2016; 3
S Tian (64062_CR2) 2024; 25
WHK Chiu (64062_CR8) 2024; 26
D Jin (64062_CR21) 2021; 11
64062_CR13
64062_CR35
64062_CR14
64062_CR36
64062_CR11
64062_CR33
64062_CR12
Z Zhao (64062_CR19) 2023; 10
64062_CR34
64062_CR17
64062_CR15
64062_CR37
64062_CR16
64062_CR31
64062_CR10
64062_CR32
64062_CR30
N Oh (64062_CR6) 2023; 104
Q Jin (64062_CR4) 2024; 40
64062_CR24
Q Jin (64062_CR3) 2024; 15
64062_CR25
64062_CR22
64062_CR1
64062_CR23
64062_CR28
T Dave (64062_CR7) 2023; 6
64062_CR29
64062_CR26
64062_CR5
64062_CR27
References_xml – ident: 64062_CR17
  doi: 10.48550/arXiv.2307.15043
– ident: 64062_CR31
  doi: 10.48550/arXiv.2411.11525
– ident: 64062_CR27
– ident: 64062_CR23
– volume: 50
  start-page: 90
  year: 2024
  ident: 64062_CR9
  publication-title: J. Med. Ethics
  doi: 10.1136/jme-2023-109549
– volume: 40
  year: 2024
  ident: 64062_CR4
  publication-title: Bioinformatics
  doi: 10.1093/bioinformatics/btae075
– ident: 64062_CR29
– volume: 3
  year: 2016
  ident: 64062_CR18
  publication-title: Sci. Data
  doi: 10.1038/sdata.2016.35
– ident: 64062_CR37
– ident: 64062_CR33
– ident: 64062_CR5
  doi: 10.48550/arXiv.2404.15777
– ident: 64062_CR14
– ident: 64062_CR16
  doi: 10.48550/arXiv.2310.15140
– ident: 64062_CR32
  doi: 10.1109/ICCV51070.2023.00412
– volume: 104
  start-page: 269
  year: 2023
  ident: 64062_CR6
  publication-title: Ann. Surg. Treat. Res.
  doi: 10.4174/astr.2023.104.5.269
– ident: 64062_CR15
  doi: 10.18653/v1/2024.naacl-long.171
– volume: 6
  year: 2023
  ident: 64062_CR7
  publication-title: Front. Artif. Intell.
  doi: 10.3389/frai.2023.1169595
– volume: 25
  year: 2024
  ident: 64062_CR2
  publication-title: Brief. Bioinform.
  doi: 10.1093/bib/bbad493
– ident: 64062_CR11
  doi: 10.48550/arXiv.2312.10997
– ident: 64062_CR26
– ident: 64062_CR35
  doi: 10.1109/ICCVW54120.2021.00008
– ident: 64062_CR20
  doi: 10.48550/arXiv.2307.09288
– ident: 64062_CR30
– ident: 64062_CR10
  doi: 10.1136/bjo-2023-324438
– volume: 26
  start-page: e53724
  year: 2024
  ident: 64062_CR8
  publication-title: J. Med. Internet Res.
  doi: 10.2196/53724
– ident: 64062_CR1
  doi: 10.48550/arXiv.2303.08774
– ident: 64062_CR28
– ident: 64062_CR22
  doi: 10.48550/arXiv.1909.06146
– ident: 64062_CR25
  doi: 10.48550/arXiv.2411.18940
– volume: 10
  year: 2023
  ident: 64062_CR19
  publication-title: Sci. Data
  doi: 10.1038/s41597-023-02814-8
– ident: 64062_CR36
– ident: 64062_CR12
  doi: 10.48550/arXiv.2306.05499
– volume: 15
  year: 2024
  ident: 64062_CR3
  publication-title: Nat. Commun.
  doi: 10.1038/s41467-024-53081-z
– ident: 64062_CR13
– volume: 11
  start-page: 6421
  year: 2021
  ident: 64062_CR21
  publication-title: Appl. Sci.
  doi: 10.3390/app11146421
– ident: 64062_CR24
  doi: 10.18653/v1/W19-5003
– ident: 64062_CR34
  doi: 10.1007/978-3-031-26553-2_22
SSID ssj0000391844
Score 2.4845302
Snippet The integration of Large Language Models (LLMs) into healthcare applications offers promising advancements in medical diagnostics, treatment recommendations,...
Abstract The integration of Large Language Models (LLMs) into healthcare applications offers promising advancements in medical diagnostics, treatment...
SourceID doaj
proquest
pubmed
crossref
springer
SourceType Open Website
Aggregation Database
Index Database
Publisher
StartPage 9011
SubjectTerms 631/114/1305
692/308
Benchmarks
Computer Security
Delivery of Health Care
Disease prevention
Health care
Health services
Humanities and Social Sciences
Humans
Immunization
Language
Large Language Models
Medical imaging
multidisciplinary
Patients
Science
Science (multidisciplinary)
Ultrasonic imaging
Vaccines
X-rays
SummonAdditionalLinks – databaseName: DOAJ Directory of Open Access Journals
  dbid: DOA
  link: http://cvtisr.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMwrV1Ni9YwEB5kUfAifltdJYI3Ldsmk7Y5qrh4kEVBZW8hH1MU3O6y7bvgv3eS9H1dUfHitQ0hmUkyz2Qm8wA8I9lGNNHXjN66GiNvqUGTq4NXTnuKrskv5D6_64-OhuNj8_4S1VfKCSvlgYvgDlRggKBHRQ0RYkDvAsZUccUQOUYT6fRl1HPJmcpnsDLsuuD6SqZRw8GM-UxI7K0dGzFZt79Yolyw_08o87cIaTY8hzfhxooYxcsy0ltwhabbcK1wSH6_Ax8ypfLs0kIS3O_J2SLcFMXI8LFeNunaQ7hlSU_pxfIlQUSaxEkJz4hvKQ9cbO8sRabFme_Cp8M3H1-_rVeehDqgGpbamYb6zre-xSjdqIKLjMsC237nBlSooybN3qfxIzs7auxDS4wMvBqxGWNs1D3Ym04negBCBtk3PkpklIFylCbo3vmATVBdpA4reL6VmT0r5TBsDmOrwRYJW5awzRK2bQWvklh3LVMp6_yBFWxXBdt_KbiC_a1S7Lq_Zqt4IgzkOmkqeLr7zTsjhTvcRKeb0mbQndHc5n5R5m4k2CbOESMreLHV7s_O_z6hh_9jQo_gukzLMGUhmH3YW8439Biuhovl63z-JK_jHykA9DA
  priority: 102
  providerName: Directory of Open Access Journals
Title Adversarial prompt and fine-tuning attacks threaten medical large language models
URI https://link.springer.com/article/10.1038/s41467-025-64062-1
https://www.ncbi.nlm.nih.gov/pubmed/41068092
https://www.proquest.com/docview/3259441629
https://www.proquest.com/docview/3259856959
https://doaj.org/article/3c2265f3e0ee44c4bac4d41859eea416
Volume 16
WOSCitedRecordID wos001591591400008&url=https%3A%2F%2Fcvtisr.summon.serialssolutions.com%2F%23%21%2Fsearch%3Fho%3Df%26include.ft.matches%3Dt%26l%3Dnull%26q%3D
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
journalDatabaseRights – providerCode: PRVAON
  databaseName: DOAJ Directory of Open Access Journals
  customDbUrl:
  eissn: 2041-1723
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0000391844
  issn: 2041-1723
  databaseCode: DOA
  dateStart: 20150101
  isFulltext: true
  titleUrlDefault: https://www.doaj.org/
  providerName: Directory of Open Access Journals
– providerCode: PRVHPJ
  databaseName: ROAD: Directory of Open Access Scholarly Resources
  customDbUrl:
  eissn: 2041-1723
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0000391844
  issn: 2041-1723
  databaseCode: M~E
  dateStart: 20100101
  isFulltext: true
  titleUrlDefault: https://road.issn.org
  providerName: ISSN International Centre
– providerCode: PRVPQU
  databaseName: Advanced Technologies & Aerospace Database
  customDbUrl:
  eissn: 2041-1723
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0000391844
  issn: 2041-1723
  databaseCode: P5Z
  dateStart: 20100101
  isFulltext: true
  titleUrlDefault: https://search.proquest.com/hightechjournals
  providerName: ProQuest
– providerCode: PRVPQU
  databaseName: Biological Science Database
  customDbUrl:
  eissn: 2041-1723
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0000391844
  issn: 2041-1723
  databaseCode: M7P
  dateStart: 20100101
  isFulltext: true
  titleUrlDefault: http://search.proquest.com/biologicalscijournals
  providerName: ProQuest
– providerCode: PRVPQU
  databaseName: Health & Medical Collection
  customDbUrl:
  eissn: 2041-1723
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0000391844
  issn: 2041-1723
  databaseCode: 7X7
  dateStart: 20100101
  isFulltext: true
  titleUrlDefault: https://search.proquest.com/healthcomplete
  providerName: ProQuest
– providerCode: PRVPQU
  databaseName: ProQuest Central
  customDbUrl:
  eissn: 2041-1723
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0000391844
  issn: 2041-1723
  databaseCode: BENPR
  dateStart: 20100101
  isFulltext: true
  titleUrlDefault: https://www.proquest.com/central
  providerName: ProQuest
– providerCode: PRVPQU
  databaseName: Publicly Available Content Database
  customDbUrl:
  eissn: 2041-1723
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0000391844
  issn: 2041-1723
  databaseCode: PIMPY
  dateStart: 20100101
  isFulltext: true
  titleUrlDefault: http://search.proquest.com/publiccontent
  providerName: ProQuest
link http://cvtisr.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMwpV1Lb9QwEB7RFiQuvEsDZWUkbhA18SOJT4iiViDBKiBACxfLrwASzW43WST-PR4n2QrxuHDJIbES2zNjf5kZzwfwyNPccelMGtBbkXIXTKoSXqfWMC2MdzqLJ-Q-vCrn82qxkPXocOvGtMppTYwLtVta9JEfsYDTw9ZdUPl0dZ4iaxRGV0cKjR3YwyoJaJi1-LT1sWD184rz8axMxqqjjseVATlci7CV0TT_ZT-KZfv_hDV_i5PG7ef0-v92_AZcG4EneTZoyk245NtbcGWgovxxG95EZuZOoz6S8IWzVU9060gTUGjab9B7QnTf44l80n9BpOlbcjZEecg3TCcnk-uTRHad7g68Pz159_xFOtItpJazqk-1zHxZmNzk3FHdMKtdgHc2QAitK864cMKL8BMrTROGxJrS5j4ADMManjXOZWwfdttl6w-AUEvLzDjKA1jhtKHSilIbyzPLCucLnsDjadLVaqiqoWI0nFVqEJEKIlJRRCpP4Bjlsm2JFbHjjeX6sxoNTDEbgKRomM-859xyoy13WJlHeq_D9CdwOIlHjWbaqQvZJPBw-zgYGEZNdOuXm6FNJQopQpu7gzZse8JzpC6RNIEnk3pcvPzvA7r3777ch6sUNRTTFOQh7PbrjX8Al-33_mu3nsFOuSjjtZrB3vHJvH47i56EGeat1rNoAuFJ_fJ1_fEnKicJEA
linkProvider ProQuest
linkToHtml http://cvtisr.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMw1V1Lb9QwEB6VAoIL70eggJHgBFETP5L4gBCvqlWXFUgF7c34FUCi2WWTBfVP8RsZO8lWiMetB66J5cTxNzNfPPZ8AA88zR2XzqTI3oqUOzSpSnidWsO0MN7pLJ6Qez8pp9NqNpNvNuDHeBYmbKscfWJ01G5uwxr5NkOejqG7oPLp4msaVKNCdnWU0Ohhse-PvuMvW_tk7yXO70NKd14dvNhNB1WB1HJWdamWmS8Lk5ucO6prZrVDFmMxUmpdccaFE17gv5o0NT6S1aXNPcZRw2qe1c5lDPs9BafRj5dhC1k5K9drOqHaesX5cDYnY9V2y6MnCpqxBYZOmua_xL8oE_AnbvtbXjaGu52L_9uHugQXBmJNnvWWcBk2fHMFzvZSm0dX4W1Unm51sDeCIzpcdEQ3jtTIstNuFVaHiO66UHGAdJ8Ck_YNOeyzWORL2C5PxqVdEtWD2mvw7kTGcx02m3njbwKhlpaZcZQjGeO0ptKKUhvLM8sK5wuewKNxktWirxqiYrafVaqHhEJIqAgJlSfwPOBg3TJU_I4X5suPanAgilkkyqJmPvOec8uNttyFykPSe43TncDWCAc1uKFWHWMhgfvr2-hAQlZIN36-6ttUopAC29zo0bd-E54HaRZJE3g8wvG4878P6Na_3-UenNs9eD1Rk73p_m04T4N1hC0Zcgs2u-XK34Ez9lv3uV3ejeZF4MNJw_Qnq2NfEw
linkToPdf http://cvtisr.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMw1V1Lb9QwEB6V8hAX3o-FAkaCE0Sb-JHEB4SAsqJqtVokQBUX41daJJpdNllQ_xq_jrGTbIV43HrgmlhJHH8z89kezwfwyNPMcelMguwtT7hDkyqF14k1TAvjnU7jCbkPe8V0Wu7vy9kG_BjOwoS0ysEnRkft5jaskY8Z8nQM3TmV46pPi5htT54vviZBQSrstA5yGh1Edv3xd5y-Nc92tnGsH1M6ef3u1ZukVxhILGdlm2iZ-iI3mcm4o7piVjtkNBajptYlZ1w44QXO26Sp8PWsKmzmMaYaVvG0ci5l-NwzcLbAOWaY-M3Ex_X6Tqi8XnLen9NJWTluePRKQT82xzBKk-yXWBglA_7Ec3_bo42hb3L5f_5pV-BST7jJi85CrsKGr6_B-U6C8_g6vI2K1I0Odkiwd0eLlujakQrZd9KuwqoR0W0bKhGQ9jAwbF-To253i3wJafRkWPIlUVWouQHvT6U_N2Gzntf-NhBqaZEaRzmSNE4rKq0otLE8tSx3PucjeDIMuFp01URUzAJgpergoRAeKsJDZSN4GTCxbhkqgccL8-WB6h2LYhYJtKiYT73n3HKjLXehIpH0XuPQj2BrgIbq3VOjTnAxgofr2-hYwm6Rrv181bUpRS4FtrnVIXH9JTwLki2SjuDpAM2Th_-9Q3f-_S0P4AKiU-3tTHfvwkUaDCVkasgt2GyXK38Pztlv7edmeT9aGoFPp43Sn6ljaAY
openUrl ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=Adversarial+prompt+and+fine-tuning+attacks+threaten+medical+large+language+models&rft.jtitle=Nature+communications&rft.au=Yang%2C+Yifan&rft.au=Jin%2C+Qiao&rft.au=Huang%2C+Furong&rft.au=Lu%2C+Zhiyong&rft.date=2025-10-09&rft.pub=Nature+Publishing+Group&rft.eissn=2041-1723&rft.volume=16&rft.issue=1&rft.spage=9011&rft_id=info:doi/10.1038%2Fs41467-025-64062-1&rft.externalDBID=HAS_PDF_LINK
thumbnail_l http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/lc.gif&issn=2041-1723&client=summon
thumbnail_m http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/mc.gif&issn=2041-1723&client=summon
thumbnail_s http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/sc.gif&issn=2041-1723&client=summon