Fusion of deep transfer learning models with Gannet optimisation algorithm for an advanced image captioning system for visual disabilities

Detailed bibliography
Published in: Scientific Reports, Vol. 15, No. 1, p. 40446 - 20
Main authors: Alkhaldi, Tareq M.; Asiri, Mashael M.; Alzahrani, Fahad; Sharif, Mahir Mohammed
Format: Journal Article
Language: English
Published: London: Nature Publishing Group UK, 18 November 2025
Nature Publishing Group
Nature Portfolio
ISSN: 2045-2322
Description
Summary: The task of generating natural language descriptions of images to convey their visual content has garnered significant attention in computer vision (CV) and natural language processing (NLP). It is driven by applications such as image-based virtual assistants, indexing and retrieval, image understanding, and assistance for visually challenged people. Although visually impaired people rely on other senses, such as hearing and touch, to identify events and objects, their quality of life remains below a typical level. Automated image captioning generates captions that can be spoken aloud to individuals with visual disabilities, helping them recognize objects and events happening nearby. With the aid of image captioning techniques and artificial intelligence (AI) speech recognition methods, visually impaired individuals can quickly understand the content of an image, as these methods automatically generate text captions that accurately describe it. Therefore, this study presents a novel Fusion of Deep Transfer Learning Models with the Gannet Optimisation Algorithm for an Advanced Image Captioning System for Visual Disabilities (FDTLGO-AICSVD) model. The aim is to provide a robust and efficient image captioning framework specifically designed to assist visually impaired persons through precise and descriptive image-to-text conversion. Initially, the FDTLGO-AICSVD approach applies two distinct types of image preprocessing, noise removal and contrast enhancement, to improve the clarity of visual features, while text preprocessing standardizes and prepares the textual data for analysis. Furthermore, DenseNet121, VGG19, and MobileNetV2 models are utilized to extract features from image data, whereas Term Frequency-Inverse Document Frequency (TF-IDF) is applied to extract features from text data. To achieve optimal performance, the Gannet Optimization Algorithm (GOA) is employed for hyperparameter tuning, enabling the method to generate precise and context-aware captions. Extensive experimentation with the FDTLGO-AICSVD method is performed on the Flickr8k and Flickr30k datasets. The comparative study shows a superior BLEU-4 score of 45.11% on the Flickr8k dataset and 58.91% on the Flickr30k dataset, along with significantly higher CIDEr scores of 63.17 on Flickr8k and 69.81 on Flickr30k, demonstrating the enhanced descriptive accuracy and language generation capability of the model on both datasets.
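
As a rough illustration of the feature-extraction stage described in the summary, the sketch below fuses DenseNet121, VGG19, and MobileNetV2 image features and builds TF-IDF vectors for caption text. The pooling strategy, fusion by simple concatenation, and the function names (fused_image_features, caption_tfidf_features) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of multi-backbone feature fusion + TF-IDF text features,
# under the assumptions stated above (not the paper's exact configuration).
import numpy as np
from tensorflow.keras.applications import DenseNet121, VGG19, MobileNetV2
from tensorflow.keras.applications.densenet import preprocess_input as dense_pre
from tensorflow.keras.applications.vgg19 import preprocess_input as vgg_pre
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input as mobile_pre
from sklearn.feature_extraction.text import TfidfVectorizer

# Pretrained backbones with classification heads removed; global average
# pooling yields one fixed-length vector per image from each network.
backbones = [
    (DenseNet121(weights="imagenet", include_top=False, pooling="avg"), dense_pre),
    (VGG19(weights="imagenet", include_top=False, pooling="avg"), vgg_pre),
    (MobileNetV2(weights="imagenet", include_top=False, pooling="avg"), mobile_pre),
]

def fused_image_features(images: np.ndarray) -> np.ndarray:
    """Concatenate DenseNet121, VGG19, and MobileNetV2 feature vectors.

    `images` is a float array of shape (n, 224, 224, 3) with values in [0, 255].
    Returns an array of shape (n, 1024 + 512 + 1280).
    """
    parts = []
    for model, preprocess in backbones:
        # Each backbone uses its own preprocessing; copy so it is not applied in place.
        feats = model.predict(preprocess(images.copy()), verbose=0)
        parts.append(feats)
    return np.concatenate(parts, axis=1)

def caption_tfidf_features(captions):
    """TF-IDF vectors for already-preprocessed caption texts."""
    vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
    return vectorizer.fit_transform(captions), vectorizer
```

Concatenating complementary backbone embeddings is one common way to fuse transfer-learning features before passing them to a caption decoder; the fused vectors and TF-IDF matrix would feed the downstream captioning and tuning stages described in the summary.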
DOI: 10.1038/s41598-025-24171-9