Large Language Models With Contrastive Decoding Algorithm for Hallucination Mitigation in Low‐Resource Languages

Bibliographic Details
Published in: CAAI Transactions on Intelligence Technology, Vol. 10, no. 4, pp. 1104-1117
Main Authors: Hongying, Zan; Javed, Arifa; Abdullah, Muhammad; Rashid, Javed; Faheem, Muhammad
Format: Journal Article
Language: English
Published: Wiley, 01.08.2025
ISSN: 2468-2322, 2468-6557
Description
Summary: Neural machine translation (NMT) has advanced with deep learning and large‐scale multilingual models, yet low‐resource languages often lack sufficient training data, which leads to hallucinations. These hallucinations produce translated content that diverges significantly from the source text. This research proposes a refined Contrastive Decoding (CD) algorithm that dynamically adjusts the weights of log probabilities from a strong expert model and a weak amateur model to mitigate hallucinations in low‐resource NMT and improve translation quality. Advanced large language NMT models, including ChatGLM and LLaMA, are fine‐tuned and deployed for their superior contextual understanding and cross‐lingual capabilities. The refined CD algorithm evaluates multiple candidate translations using BLEU score, semantic similarity, and Named Entity Recognition accuracy. Extensive experimental results show substantial improvements in translation quality and a significant reduction in hallucination rates. Fine‐tuned models achieve higher evaluation metrics than both baseline and state‐of‐the‐art models. An ablation study confirms the contribution of each methodological component and highlights the effectiveness of the refined CD algorithm and the advanced models in mitigating hallucinations. Notably, the refined methodology increased the BLEU score by approximately 30% compared to baseline models.
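As a rough illustration of the decoding idea described in the abstract, the sketch below contrasts expert and amateur log probabilities for a single next-token choice. The fixed `alpha` weight, the `plausibility` cutoff, and the toy token distributions are illustrative assumptions only; the paper's actual algorithm adjusts the weights dynamically and additionally scores full candidate translations with BLEU, semantic similarity, and NER accuracy, which is not reproduced here.

```python
import math

def contrastive_decode_step(expert_logprobs, amateur_logprobs, alpha=0.5, plausibility=0.1):
    """Pick the next token by contrasting expert and amateur log probabilities.

    expert_logprobs / amateur_logprobs: dicts over the same vocabulary,
        mapping token -> log probability under each model.
    alpha: weight on the amateur term (the paper adjusts its weighting
        dynamically; a fixed value is used here purely for illustration).
    plausibility: tokens whose expert probability falls below
        plausibility * max expert probability are skipped, the usual
        contrastive-decoding plausibility constraint.
    """
    # Plausibility threshold in log space, relative to the expert's top token.
    threshold = max(expert_logprobs.values()) + math.log(plausibility)

    best_token, best_score = None, float("-inf")
    for token, lp_expert in expert_logprobs.items():
        if lp_expert < threshold:
            continue  # implausible even under the expert model
        lp_amateur = amateur_logprobs[token]
        # Reward expert confidence, penalise amateur confidence.
        score = lp_expert - alpha * lp_amateur
        if score > best_score:
            best_token, best_score = token, score
    return best_token

# Toy next-token distributions: the amateur model leans towards tokens the
# expert considers less reliable, so the contrast sharpens the expert's choice.
expert = {"Paris": math.log(0.6), "London": math.log(0.3), "banana": math.log(0.1)}
amateur = {"Paris": math.log(0.2), "London": math.log(0.5), "banana": math.log(0.3)}
print(contrastive_decode_step(expert, amateur))  # -> Paris
```

The plausibility threshold keeps the contrast from promoting tokens that the expert model itself considers unlikely, the standard guard in contrastive decoding against rewarding rare, low-quality tokens.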
Bibliography: The authors are highly grateful to their affiliated universities and institutes for providing research facilities. The research work of M. Faheem is supported by VTT Technical Research Centre of Finland.
DOI: 10.1049/cit2.70004