Bangla Image Caption Generation Using Vision Transformer (ViT) Based Model

Bibliographic Details
Published in: 2025 International Conference on Electrical, Computer and Communication Engineering (ECCE), pp. 1-6
Main Authors: Sarker, Arpita; Das, Udoy; Murad, Hasan
Format: Conference Proceeding
Language: English
Published: IEEE, 13.02.2025
Description
Summary: In the era of digital content and visual communication, Bangla image captioning has emerged as a crucial technology for enhancing accessibility, improving content discoverability, and bridging the language gap for millions of Bangla speakers worldwide. Our work proposes a novel approach that combines a vision transformer as a feature extractor with a customized encoder-decoder architecture for Bangla caption generation. We assess the effectiveness of our model using a range of metrics tailored for Bangla, including BLEU, ROUGE-L, and METEOR. The proposed model achieves state-of-the-art performance with a BLEU score of 0.6572, a ROUGE-L score of 0.6218, and a METEOR score of 0.4513. A comparative analysis with other architectures, such as Xception, ResNet101, ResNet50, and InceptionV3 combined with encoder-decoder models, highlights the advantages and drawbacks of the different approaches to Bangla image captioning.
DOI: 10.1109/ECCE64574.2025.11013210
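
Illustrative sketch: the summary above describes a vision transformer used as a feature extractor feeding an encoder-decoder that generates Bangla captions. The PyTorch code below is only a minimal sketch of how such a pipeline can be wired together, not the authors' implementation; the torchvision ViT-B/16 backbone, the vocabulary size, the decoder depth, and all module names are assumptions made for illustration.

import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

class ViTCaptioner(nn.Module):
    """Generic ViT-encoder + Transformer-decoder captioner (illustrative only)."""

    def __init__(self, vocab_size, d_model=768, n_heads=8, n_layers=4, max_len=40):
        super().__init__()
        # Pretrained ViT-B/16 backbone used purely as a patch-feature extractor.
        self.vit = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)
        self.vit.heads = nn.Identity()  # the ImageNet classification head is not needed
        # Transformer decoder that cross-attends to the ViT patch features
        # and predicts Bangla caption tokens autoregressively.
        self.tok_embed = nn.Embedding(vocab_size, d_model)
        self.pos_embed = nn.Embedding(max_len, d_model)
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def encode(self, images):
        # (B, 3, 224, 224) -> (B, 197, 768): [CLS] token plus 196 patch features.
        # _process_input is torchvision's internal patch-embedding step.
        x = self.vit._process_input(images)
        cls = self.vit.class_token.expand(x.size(0), -1, -1)
        return self.vit.encoder(torch.cat([cls, x], dim=1))

    def forward(self, images, captions):
        memory = self.encode(images)
        # Teacher-forced decoding over already-tokenized caption ids.
        pos = torch.arange(captions.size(1), device=captions.device)
        tgt = self.tok_embed(captions) + self.pos_embed(pos)
        mask = nn.Transformer.generate_square_subsequent_mask(
            captions.size(1)).to(captions.device)
        out = self.decoder(tgt, memory, tgt_mask=mask)
        return self.lm_head(out)  # next-token logits, shape (B, T, vocab_size)

# Hypothetical usage with an assumed 8,000-token Bangla subword vocabulary:
# model = ViTCaptioner(vocab_size=8000)
# logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 8000, (2, 20)))
# logits.shape -> torch.Size([2, 20, 8000])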