Verbatim Data Transcription Failures in LLM Code Generation: A State-Tracking Stress Test

Bibliographic Details
Title: Verbatim Data Transcription Failures in LLM Code Generation: A State-Tracking Stress Test
Authors: Haque, Mohd Ariful; Gupta, Kishor Datta; Rahman, Mohammad Ashiqur; George, Roy
Publication Year: 2026
Collection: ArXiv.org (Cornell University Library)
Keywords: Software Engineering; Cryptography and Security
Description: Many real-world software tasks require exact transcription of provided data into code, such as cryptographic constants, protocol test vectors, allowlists, and calibration tables. These tasks are operationally sensitive because small omissions or alterations can remain silent while still producing syntactically valid programs. This paper introduces a deliberately minimal transcription-to-code benchmark that isolates this reliability concern in LLM-based code generation. Given a list of high-precision decimal constants, a model must generate Python code that embeds the constants verbatim and performs a simple aggregate computation. We describe the prompting variants, an evaluation protocol based on exact-string inclusion, and the analysis framework used to characterize state-tracking and long-horizon generation failures. The benchmark is intended as a compact stress test that complements existing code-generation evaluations by focusing on data integrity rather than algorithmic reasoning.
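
A minimal Python sketch of an exact-string inclusion check of the kind the description outlines, assuming a hypothetical checker function and illustrative constants (not the authors' released evaluation harness):

    # Hypothetical checker: every provided constant must appear
    # character-for-character in the generated source code.
    def constants_transcribed_verbatim(constants, generated_code):
        return all(c in generated_code for c in constants)

    # Illustrative high-precision constants and a candidate program.
    constants = ["3.14159265358979323846", "2.71828182845904523536"]
    generated_code = (
        "values = [3.14159265358979323846, 2.71828182845904523536]\n"
        "print(sum(values))"
    )
    print(constants_transcribed_verbatim(constants, generated_code))  # True

A substring check of this form flags silent omissions or reformatting (e.g., rounding or truncating a constant) even when the generated program remains syntactically valid.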
Publication Type: text
Language: unknown
Relation: http://arxiv.org/abs/2601.03640
Availability: http://arxiv.org/abs/2601.03640
Document ID: edsbas.707D699C
Database: BASE