Evaluation of a retrieval-augmented generation system using a Japanese Institutional Nuclear Medicine Manual and large language model-automated scoring

Detailed bibliography
Published in: Radiological Physics and Technology, Vol. 18, No. 3, pp. 861–876
Main authors: Fukui, Yusuke; Kawata, Yuhei; Kobashi, Kazumasa; Nagatani, Yukihiro; Iguchi, Harumi
Format: Journal Article
Language: English
Published: Singapore: Springer Nature Singapore, 01.09.2025 (Springer Nature B.V.)
ISSN: 1865-0333, 1865-0341
Description
Summary: Recent advances in large language models (LLMs) enable domain-specific question answering using external knowledge. However, addressing information that is not included in training data remains a challenge, particularly in nuclear medicine, where examination protocols are frequently updated and vary across institutions. In this study, we developed a retrieval-augmented generation (RAG) system using 40 internal manuals from a single Japanese hospital, each corresponding to a different examination in nuclear medicine. These institution-specific documents were segmented and indexed using a hybrid retrieval strategy combining dense vector search (text-embedding-3-small) and sparse keyword search (BM25). GPT-3.5 and GPT-4o were used with the OpenAI application programming interface (API) for response generation. The quality of the generated answers was assessed using a four-point Likert scale by three certified radiological technologists, of whom one held an additional certification in nuclear medicine and another held an additional certification in medical physics. Automated evaluation was conducted using RAGAS metrics, including factual correctness and context recall. The GPT-4o model combined with hybrid retrieval achieved the highest performance according to expert evaluations. Although traditional string-based metrics such as ROUGE and the Levenshtein distance aligned poorly with human ratings, RAGAS provided consistent rankings across system configurations, despite showing only a modest correlation with manual scores. These findings demonstrate that integrating examination-specific institutional manuals into RAG frameworks can effectively support domain-specific question answering in nuclear medicine. Moreover, LLM-based evaluation methods such as RAGAS may serve as practical tools to complement expert reviews in developing healthcare-oriented artificial intelligence systems.
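
As an illustration of the architecture the summary describes, the following is a minimal Python sketch of hybrid retrieval (dense text-embedding-3-small vectors fused with BM25 keyword scores) feeding GPT-4o through the OpenAI API. It is a sketch under stated assumptions, not the authors' implementation: the chunk texts, the fusion weight alpha, top_k, and the prompt wording are all placeholders, and the abstract does not specify the score-fusion scheme.

```python
# Minimal hybrid-RAG sketch, assuming the openai and rank_bm25 packages.
import numpy as np
from openai import OpenAI
from rank_bm25 import BM25Okapi

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder segments standing in for the 40 institutional manuals.
chunks = [
    "Bone scintigraphy: inject 740 MBq of Tc-99m MDP; image after 3 hours.",
    "Myocardial perfusion SPECT: avoid caffeine for 24 hours before stress.",
]

def embed(texts):
    # Dense vectors from text-embedding-3-small, the model named above.
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

chunk_vecs = embed(chunks)
# Sparse keyword index; whitespace tokenization is a simplification
# (the real manuals are Japanese and would need a morphological tokenizer).
bm25 = BM25Okapi([c.lower().split() for c in chunks])

def hybrid_search(query, alpha=0.5, top_k=2):
    # Weighted sum of normalized dense and sparse scores; this particular
    # fusion rule is an assumption, not taken from the paper.
    q = embed([query])[0]
    dense = chunk_vecs @ q / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q))
    sparse = bm25.get_scores(query.lower().split())
    if sparse.max() > 0:
        sparse = sparse / sparse.max()
    fused = alpha * dense + (1 - alpha) * sparse
    return [chunks[i] for i in np.argsort(fused)[::-1][:top_k]]

def answer(query):
    # Generate a grounded response with GPT-4o via the OpenAI API.
    context = "\n\n".join(hybrid_search(query))
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Answer using only the provided manual excerpts."},
            {"role": "user",
             "content": f"Excerpts:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return resp.choices[0].message.content

print(answer("How long after Tc-99m MDP injection does imaging start?"))
```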
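
The automated scoring step can likewise be sketched. The snippet below uses the RAGAS 0.1-series API; depending on the installed version, the metric the summary calls "factual correctness" maps to answer_correctness (0.1.x) or FactualCorrectness (0.2+), and all question/answer/context strings are invented placeholders rather than data from the study.

```python
# Sketch of LLM-automated scoring with RAGAS (0.1-series API assumed).
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import answer_correctness, context_recall

eval_data = Dataset.from_dict({
    "question": ["How long should patients fast before an FDG-PET scan?"],
    "answer": ["At least 4 hours before FDG injection."],
    "contexts": [["Patients must fast for at least 4 hours prior to FDG injection."]],
    "ground_truth": ["Fasting for at least 4 hours before FDG injection."],
})

# evaluate() calls an LLM judge (OpenAI by default, via OPENAI_API_KEY).
scores = evaluate(eval_data, metrics=[answer_correctness, context_recall])
print(scores)
```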
DOI: 10.1007/s12194-025-00941-y