Translating to a Low-Resource Language with Compiler Feedback: A Case Study on Cangjie


Detailed bibliography

Published in: IEEE Transactions on Software Engineering, Vol. 51, No. 9, pp. 2671–2692
Main authors: Wang, Jun; Su, Chenghao; Ou, Yijie; Li, Yanhui; Tan, Jialiang; Chen, Lin; Zhou, Yuming
Format: Journal Article
Language: English
Publication details: New York: IEEE, 01.09.2025 (IEEE Computer Society)
ISSN: 0098-5589, 1939-3520
Online access: Get full text
Abstract: In the rapidly advancing field of software development, the demand for practical code translation tools has surged, driven by the need for interoperability across different programming environments. Existing learning-based approaches often struggle with low-resource programming languages that lack sufficient parallel code corpora for training. To address these limitations, we propose a novel training framework that begins with monolingual seed corpora, generating parallel datasets via back-translation and incorporating compiler feedback to optimize the translation model. As a case study, we apply our method to train a code translation model for a new-born low-resource programming language, Cangjie. We also construct a parallel test dataset for Java-to-Cangjie translation and test cases to evaluate the effectiveness of our approach. Experimental results demonstrate that compiler feedback greatly enhances the syntactical correctness, semantic accuracy, and test pass rates of the translated Cangjie code. These findings highlight the potential of our method to support code translation in low-resource settings, expanding the capabilities of learning-based models for programming languages with limited data availability.
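The pipeline described in the abstract (back-translation from a monolingual seed corpus, with compiler feedback used to vet the model's output) can be illustrated with a minimal sketch. All names here (`back_translate`, `forward_translate`, `compiles`) are hypothetical stand-ins, not the paper's actual implementation; a real system would call trained models and invoke the Cangjie compiler.

```python
# Hypothetical sketch of back-translation plus compiler-feedback filtering.
# The stand-in functions below are illustrative only; the paper's pipeline,
# model interfaces, and compiler invocation may differ.

from dataclasses import dataclass


@dataclass
class Pair:
    java: str
    cangjie: str


def back_translate(cangjie_snippet: str) -> str:
    """Stand-in for a model translating Cangjie -> Java (back-translation).

    Here we just tag the snippet; a real system would call a trained model.
    """
    return f"// java version of: {cangjie_snippet}"


def forward_translate(java_snippet: str) -> str:
    """Stand-in for the Java -> Cangjie model being trained."""
    return f"/* cangjie version of: {java_snippet} */"


def compiles(cangjie_snippet: str) -> bool:
    """Stand-in for running the Cangjie compiler and checking its exit code.

    As a toy acceptance criterion, reject empty output.
    """
    return bool(cangjie_snippet.strip())


def build_training_set(seed_corpus: list[str]) -> list[Pair]:
    """Turn a monolingual Cangjie seed corpus into parallel (Java, Cangjie)
    pairs via back-translation, keeping only pairs whose round-tripped
    Cangjie side the compiler accepts (compiler feedback as a filter)."""
    pairs = []
    for cj in seed_corpus:
        java = back_translate(cj)            # synthesize the Java side
        candidate = forward_translate(java)  # model's round-trip output
        if compiles(candidate):              # compiler-feedback check
            pairs.append(Pair(java=java, cangjie=cj))
    return pairs


seeds = ["func main() { println(1) }", "let x = 2"]
dataset = build_training_set(seeds)
print(len(dataset))  # 2
```

In a real pipeline the compiler signal would also feed back into training (e.g. preference optimization over compiling vs. non-compiling outputs), not just filtering, as the abstract's "incorporating compiler feedback to optimize the translation model" suggests.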
Author_xml – sequence: 1
  givenname: Jun
  orcidid: 0009-0001-5491-0443
  surname: Wang
  fullname: Wang, Jun
  email: 602022330025@smail.nju.edu.cn
  organization: State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
– sequence: 2
  givenname: Chenghao
  orcidid: 0000-0003-3273-1222
  surname: Su
  fullname: Su, Chenghao
  email: dz21330021@smail.nju.edu.cn
  organization: State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
– sequence: 3
  givenname: Yijie
  orcidid: 0009-0005-3355-993X
  surname: Ou
  fullname: Ou, Yijie
  email: chenghao_su@smail.nju.edu.cn
  organization: State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
– sequence: 4
  givenname: Yanhui
  orcidid: 0000-0003-2282-7175
  surname: Li
  fullname: Li, Yanhui
  email: yanhuili@nju.edu.cn
  organization: State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
– sequence: 5
  givenname: Jialiang
  surname: Tan
  fullname: Tan, Jialiang
  email: tanjialiang1@huawei.com
  organization: Huawei, Hangzhou, China
– sequence: 6
  givenname: Lin
  orcidid: 0000-0003-2352-2226
  surname: Chen
  fullname: Chen, Lin
  email: lchen@nju.edu.cn
  organization: State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
– sequence: 7
  givenname: Yuming
  orcidid: 0000-0002-4645-2526
  surname: Zhou
  fullname: Zhou, Yuming
  email: zhouyuming@nju.edu.cn
  organization: State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
CODEN IESEDJ
ContentType Journal Article
Copyright Copyright IEEE Computer Society 2025
DOI 10.1109/TSE.2025.3594908
Discipline Computer Science
EISSN 1939-3520
EndPage 2692
ExternalDocumentID 10_1109_TSE_2025_3594908
11106812
Genre orig-research
GrantInformation_xml – fundername: Huawei-Nanjing University Software New Technology Joint Laboratory
– fundername: Natural Science Foundation of China
  grantid: 62272221; 62172205; 62172202
  funderid: 10.13039/501100001809
ISSN 0098-5589
IsPeerReviewed true
IsScholarly true
Issue 9
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
ORCID 0009-0001-5491-0443
0000-0002-4645-2526
0009-0005-3355-993X
0000-0003-2282-7175
0000-0003-2352-2226
0000-0003-3273-1222
PageCount 22
PublicationDate 2025-09-01
PublicationPlace New York
PublicationTitle IEEE transactions on software engineering
PublicationTitleAbbrev TSE
PublicationYear 2025
Publisher IEEE
IEEE Computer Society
StartPage 2671
SubjectTerms Cangjie
Case studies
Codes
compiler feedback
Compilers
Computer languages
Data models
Datasets
Feedback
Java
Large language models
Learning
Programming environments
Programming languages
Reactive power
Software
Software development
Syntactics
Training
Translating
Translation
Transpiler
Title Translating to a Low-Resource Language with Compiler Feedback: A Case Study on Cangjie
URI https://ieeexplore.ieee.org/document/11106812
https://www.proquest.com/docview/3252311343
Volume 51