Automating the extraction of otology symptoms from clinic letters: a methodological study using natural language processing

Detailed bibliography
Published in: BMC Medical Informatics and Decision Making, Vol. 25, No. 1, pp. 353-8
Main authors: Joshi, Nikhil, Noor, Kawsar, Bai, Xi, Forbes, Marina, Ross, Talisa, Barrett, Liam, Dobson, Richard J. B., Schilder, Anne G. M., Mehta, Nishchay, Lilaonitkul, Watjana
Medium: Journal Article
Language: English
Published: London: BioMed Central, 29.09.2025
ISSN: 1472-6947
Description
Summary: Background: Most healthcare data is stored in an unstructured format that requires processing before it can be used for research. This is generally done manually, which is both time-consuming and poorly scalable. Natural language processing (NLP) using machine learning offers a way to automate data extraction. In this paper, we describe the development of a set of NLP models to extract and contextualise otology symptoms from free-text documents.

Methods: A dataset of 1,148 otology clinic letters written between 2009 and 2011 at a London NHS hospital was manually annotated and used to train a hybrid dictionary and machine learning NLP model to identify six key otological symptoms: hearing loss, impairment of balance, otalgia, otorrhoea, tinnitus and vertigo. A set of bidirectional long short-term memory (Bi-LSTM) models was then trained to extract contextual information for each symptom, for example the laterality of the affected ear.

Results: There were 1,197 symptom annotations and 2,861 contextual annotations, with 24% of patients presenting with hearing loss. The symptom extraction model achieved a macro F1 score of 0.73, and the Bi-LSTM models achieved a mean macro F1 score of 0.69 on the contextualisation tasks.

Conclusion: NLP models for symptom extraction and contextualisation were successfully created and shown to perform well on real-world data. Refinement is needed to produce models that can run without manual review. Downstream applications for these models include deep semantic searching of electronic health records, cohort identification for clinical trials, and facilitating research into hearing loss phenotypes. Further testing of the external validity of the developed models is required.
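To make the contextualisation step concrete, the following is a minimal sketch, assuming a PyTorch implementation, of how one Bi-LSTM context classifier (here, an illustrative laterality task) could be assembled and scored with macro F1. It is not the authors' code; the class name, label set, hyperparameters and vocabulary size are placeholders.

# Illustrative sketch only (assumed PyTorch implementation; hyperparameters,
# label set, and vocabulary size are placeholders, not the authors' settings).
import torch
import torch.nn as nn
from sklearn.metrics import f1_score

class BiLSTMContextClassifier(nn.Module):
    """Classifies the context of a symptom mention, e.g. the laterality of hearing loss."""
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, num_classes=4):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(hidden_dim * 2, num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer-encoded sentences around a symptom mention
        embedded = self.embedding(token_ids)
        _, (hidden, _) = self.lstm(embedded)
        # concatenate the final forward and backward hidden states
        features = torch.cat((hidden[-2], hidden[-1]), dim=1)
        return self.fc(features)

# Hypothetical label set for one contextualisation task (laterality)
LABELS = ["left", "right", "bilateral", "unspecified"]

model = BiLSTMContextClassifier(vocab_size=5000, num_classes=len(LABELS))
batch = torch.randint(1, 5000, (8, 40))   # 8 dummy sentences, 40 token ids each
logits = model(batch)                     # shape: (8, 4)

# Macro F1, the metric reported in the paper: the unweighted mean of per-class F1 scores
y_true = torch.randint(0, len(LABELS), (8,)).numpy()
y_pred = logits.argmax(dim=1).numpy()
print(f1_score(y_true, y_pred, average="macro", zero_division=0))

In the study design described above, one such model would be trained per contextual attribute, downstream of the hybrid dictionary and machine learning step that first identifies the symptom mentions; that extraction step is not shown here.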
DOI: 10.1186/s12911-025-03180-8