Verbatim Data Transcription Failures in LLM Code Generation: A State-Tracking Stress Test

Detailed bibliography
Title: Verbatim Data Transcription Failures in LLM Code Generation: A State-Tracking Stress Test
Authors: Haque, Mohd Ariful; Gupta, Kishor Datta; Rahman, Mohammad Ashiqur; George, Roy
Year of publication: 2026
Collection: ArXiv.org (Cornell University Library)
Topics: Software Engineering, Cryptography and Security
Description: Many real-world software tasks require exact transcription of provided data into code, such as cryptographic constants, protocol test vectors, allowlists, and calibration tables. These tasks are operationally sensitive because small omissions or alterations can remain silent while producing syntactically valid programs. This paper introduces a deliberately minimal transcription-to-code benchmark to isolate this reliability concern in LLM-based code generation. Given a list of high-precision decimal constants, a model must generate Python code that embeds the constants verbatim and performs a simple aggregate computation. We describe the prompting variants, evaluation protocol based on exact-string inclusion, and analysis framework used to characterize state-tracking and long-horizon generation failures. The benchmark is intended as a compact stress test that complements existing code-generation evaluations by focusing on data integrity rather than algorithmic reasoning.
Document type: text
Language: unknown
Relation: http://arxiv.org/abs/2601.03640
Availability: http://arxiv.org/abs/2601.03640
Accession number: edsbas.707D699C
Database: BASE
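
Note: The abstract describes an evaluation protocol based on exact-string inclusion, i.e., checking that each provided constant appears verbatim in the generated program. Below is a minimal illustrative sketch of such a check in Python; the function name, return structure, and example constants are assumptions made for illustration and are not taken from the paper.

    # Hypothetical sketch of an exact-string-inclusion check for verbatim
    # transcription. All names and details are assumptions, not the authors'
    # implementation.

    from typing import List


    def check_verbatim_inclusion(constants: List[str], generated_code: str) -> dict:
        """Report which provided constants appear verbatim in the generated code.

        Each constant is treated as an exact string: any rounding, truncation,
        or reformatting of its digits counts as a miss.
        """
        missing = [c for c in constants if c not in generated_code]
        return {
            "total": len(constants),
            "missing": missing,
            "pass": not missing,  # pass only if every constant is embedded exactly
        }


    if __name__ == "__main__":
        constants = ["3.14159265358979323846", "2.71828182845904523536"]
        code = "values = [3.14159265358979323846, 2.718281828459045]\nprint(sum(values))"
        print(check_verbatim_inclusion(constants, code))
        # The second constant was truncated in the generated code, so the check
        # reports it as missing and the sample fails.

Using plain substring membership deliberately rejects values that are numerically equal but reformatted (e.g., truncated or with trailing zeros stripped), which matches the data-integrity focus described in the abstract.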