Automating the extraction of otology symptoms from clinic letters: a methodological study using natural language processing

Bibliographic Details
Published in: BMC Medical Informatics and Decision Making, Vol. 25, no. 1, pp. 353–8
Main Authors: Joshi, Nikhil; Noor, Kawsar; Bai, Xi; Forbes, Marina; Ross, Talisa; Barrett, Liam; Dobson, Richard J. B.; Schilder, Anne G. M.; Mehta, Nishchay; Lilaonitkul, Watjana
Format: Journal Article
Language: English
Published: London: BioMed Central, 29.09.2025
ISSN: 1472-6947
Description
Summary:

Background: Most healthcare data is held in an unstructured format that requires processing before it can be used for research. This is generally done manually, which is both time-consuming and poorly scalable. Natural language processing (NLP) using machine learning offers a way to automate data extraction. In this paper we describe the development of a set of NLP models to extract and contextualise otology symptoms from free-text documents.

Methods: A dataset of 1,148 otology clinic letters from a London NHS hospital, written between 2009 and 2011, was manually annotated and used to train a hybrid dictionary and machine learning NLP model to identify six key otological symptoms: hearing loss, impairment of balance, otalgia, otorrhoea, tinnitus and vertigo. Subsequently, a set of Bidirectional Long Short-Term Memory (Bi-LSTM) models was trained to extract contextual information for each symptom, for example the laterality of the ear affected.

Results: There were 1,197 symptom annotations and 2,861 contextual annotations, with 24% of patients presenting with hearing loss. The symptom extraction model achieved a macro F1 score of 0.73, and the Bi-LSTM models achieved a mean macro F1 score of 0.69 across the contextualisation tasks.

Conclusion: NLP models for symptom extraction and contextualisation were successfully created and shown to perform well on real-life data, although refinement is needed to produce models that can run without manual review. Downstream applications include deep semantic searching of electronic health records, cohort identification for clinical trials and research into hearing loss phenotypes. Further testing of the external validity of the developed models is required.
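
Note: the record does not include the authors' implementation, so the sketch below is only an illustration of the kind of Bi-LSTM contextualisation model the abstract describes, here framed as classifying the laterality of a symptom mention. The class name SymptomContextBiLSTM, the label set and all hyperparameters are assumptions for illustration, not the published method.

    # Illustrative sketch only: architecture, labels and hyperparameters are assumed,
    # not taken from the paper.
    import torch
    import torch.nn as nn

    # Hypothetical contextual labels for the laterality task mentioned in the abstract.
    LATERALITY_LABELS = ["left", "right", "bilateral", "unspecified"]

    class SymptomContextBiLSTM(nn.Module):
        """Bi-LSTM sentence classifier: embeds the tokens around a symptom mention
        and predicts a single contextual label (here, laterality)."""

        def __init__(self, vocab_size: int, embed_dim: int = 100,
                     hidden_dim: int = 128, num_labels: int = len(LATERALITY_LABELS)):
            super().__init__()
            self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
            self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                                  bidirectional=True)
            self.classifier = nn.Linear(2 * hidden_dim, num_labels)

        def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
            # token_ids: (batch, seq_len) integer-encoded tokens around the mention
            embedded = self.embedding(token_ids)   # (batch, seq_len, embed_dim)
            outputs, _ = self.bilstm(embedded)     # (batch, seq_len, 2*hidden_dim)
            pooled = outputs.mean(dim=1)           # mean-pool over the sequence
            return self.classifier(pooled)         # (batch, num_labels) logits

    if __name__ == "__main__":
        # Toy forward pass with a hypothetical vocabulary of 5,000 tokens.
        model = SymptomContextBiLSTM(vocab_size=5000)
        dummy_batch = torch.randint(1, 5000, (2, 30))   # 2 sentences, 30 tokens each
        print(model(dummy_batch).shape)                 # torch.Size([2, 4])

In practice one such classifier would be trained per contextual attribute, with the macro F1 scores reported in the abstract averaged across these tasks.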
DOI: 10.1186/s12911-025-03180-8