Large Language Models With Contrastive Decoding Algorithm for Hallucination Mitigation in Low‐Resource Languages
| Published in: | CAAI Transactions on Intelligence Technology Vol. 10; no. 4; pp. 1104–1117 |
|---|---|
| Main Authors: | Zan Hongying, Arifa Javed, Muhammad Abdullah, Javed Rashid, Muhammad Faheem |
| Format: | Journal Article |
| Language: | English |
| Published: | Wiley, 01.08.2025 |
| Subjects: | ChatGLM; contrastive decoding; hallucination; LLAMA; LLM; low resource NMT |
| ISSN: | 2468-2322, 2468-6557 |
| Online Access: | Get full text |
| Abstract | Neural machine translation (NMT) has advanced with deep learning and large‐scale multilingual models, yet low‐resource language pairs often lack sufficient training data, which leads to hallucinations: translated content that diverges significantly from the source text. This research proposes a refined Contrastive Decoding (CD) algorithm that dynamically adjusts the weights of log probabilities from a strong expert model and a weak amateur model to mitigate hallucinations in low‐resource NMT and improve translation quality. Advanced large language NMT models, including ChatGLM and LLaMA, are fine‐tuned and deployed for their superior contextual understanding and cross‐lingual capabilities. The refined CD algorithm evaluates multiple candidate translations using BLEU score, semantic similarity, and Named Entity Recognition accuracy. Extensive experiments show substantial improvements in translation quality and a significant reduction in hallucination rates, with fine‐tuned models achieving higher evaluation metrics than both baseline and state‐of‐the‐art models. An ablation study confirms the contribution of each methodological component and highlights the effectiveness of the refined CD algorithm and advanced models in mitigating hallucinations. Notably, the refined methodology increased the BLEU score by approximately 30% compared to baseline models. |
|---|---|
| Author | Hongying, Zan; Faheem, Muhammad; Rashid, Javed; Javed, Arifa; Abdullah, Muhammad |
| Author_xml | 1. Zan Hongying (Zhengzhou University); 2. Arifa Javed (Zhengzhou University); 3. Muhammad Abdullah (Zhengzhou University); 4. Javed Rashid (University of Okara); 5. Muhammad Faheem (VTT Technical Research Center of Finland; muhammad.faheem@uwasa.fi) |
| ContentType | Journal Article |
| Copyright | © 2025 The Author(s). CAAI Transactions on Intelligence Technology published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology and Chongqing University of Technology. |
| DOI | 10.1049/cit2.70004 |
| EISSN | 2468-2322 |
| EndPage | 1117 |
| Genre | article |
| GrantInformation | VTT Technical Research Center of Finland |
| ISSN | 2468-2322 2468-6557 |
| IsDoiOpenAccess | true |
| IsOpenAccess | true |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 4 |
| Language | English |
| License | Attribution |
| Notes | The authors are highly grateful to their affiliated universities and institutes for providing research facilities. Funding: the research work of M. Faheem is supported by VTT Technical Research Center of Finland. |
| OpenAccessLink | https://onlinelibrary.wiley.com/doi/abs/10.1049%2Fcit2.70004 |
| PageCount | 14 |
| PublicationDate | August 2025 |
| PublicationTitle | CAAI Transactions on Intelligence Technology |
| PublicationYear | 2025 |
| Publisher | Wiley |
| StartPage | 1104 |
| SubjectTerms | ChatGLM; contrastive decoding; hallucination; LLAMA; LLM; low resource NMT |
| Title | Large Language Models With Contrastive Decoding Algorithm for Hallucination Mitigation in Low‐Resource Languages |
| URI | https://onlinelibrary.wiley.com/doi/abs/10.1049%2Fcit2.70004 https://doaj.org/article/c52ddee24f2640c4b247c678642eafdb |
| Volume | 10 |
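The abstract above describes the refined contrastive decoding step as a dynamically weighted combination of log probabilities from a strong expert model and a weak amateur model. The snippet below is a minimal Python sketch of that general idea only, not the paper's implementation: the `dynamic_alpha` weighting rule, the `plausibility` cutoff, and all function and parameter names are assumptions introduced here for illustration.

```python
# Hypothetical sketch of contrastive decoding with a dynamic expert/amateur
# weighting, loosely following the idea in the abstract. NOT the paper's
# implementation: the weighting rule and the plausibility cutoff are
# illustrative assumptions.
import torch
import torch.nn.functional as F


def dynamic_alpha(expert_logprobs: torch.Tensor) -> float:
    """Assumed weighting rule: lean harder on the amateur penalty when the
    expert distribution is high-entropy (i.e., the expert is uncertain)."""
    probs = expert_logprobs.exp()
    entropy = -(probs * expert_logprobs).sum().item()
    max_entropy = torch.log(torch.tensor(float(expert_logprobs.numel()))).item()
    return entropy / max_entropy  # normalised to [0, 1]


@torch.no_grad()
def contrastive_step(expert_logits: torch.Tensor,
                     amateur_logits: torch.Tensor,
                     plausibility: float = 0.1) -> int:
    """Choose the next token id by contrasting expert and amateur log
    probabilities over a single vocabulary-sized logit vector."""
    expert_lp = F.log_softmax(expert_logits, dim=-1)
    amateur_lp = F.log_softmax(amateur_logits, dim=-1)

    # Plausibility mask: keep only tokens the expert itself finds likely, so a
    # token is never rewarded merely because the amateur dislikes it.
    cutoff = expert_lp.max() + torch.log(torch.tensor(plausibility))
    keep = expert_lp >= cutoff

    # Dynamically weighted contrastive score (the weighting is an assumption).
    alpha = dynamic_alpha(expert_lp)
    score = expert_lp - alpha * amateur_lp
    score = score.masked_fill(~keep, float("-inf"))
    return int(score.argmax().item())


# Toy usage with random logits standing in for real expert/amateur outputs.
if __name__ == "__main__":
    vocab_size = 32000
    expert_logits = torch.randn(vocab_size)
    amateur_logits = torch.randn(vocab_size)
    print(contrastive_step(expert_logits, amateur_logits))
```

In a full decoder this step would run inside the generation loop, with both models conditioned on the same prefix at every position, and the resulting candidate translations would then be re-ranked with BLEU, semantic-similarity, and NER-accuracy scores as the abstract describes.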