Tabular-textual question answering: From parallel program generation to large language models

Bibliographic Details
Title: Tabular-textual question answering: From parallel program generation to large language models
Authors: Xushuo Tang, Liuyi Chen, Wenke Yang, Zhengyi Yang, Mingchen Ju, Xin Shu, Zihan Yang, Yifu Tang
Source: World Wide Web, Vol. 28
Publisher information: Springer Science and Business Media LLC, 2025.
Publication year: 2025
Description: Hybrid tabular-textual question answering (HTQA) integrates evidence from multiple data sources and has traditionally been handled with LSTM-based step-by-step reasoning. Such sequential approaches, however, are prone to exposure bias and cumulative errors, which limit their effectiveness. This paper first introduces ConcurGen, a parallel program generation method that transforms this paradigm by formulating complete programs, blending operations and values, in a single parallel pass. This approach both avoids the inherent pitfalls of sequential decoding and makes generation more efficient. Further investigation showed that some HTQA scenarios extend beyond traditional question answering to open-ended questions that demand dynamic, context-aware response generation. We therefore introduce a second framework that leverages large language models (LLMs) to answer both traditional and open-ended questions. Our method delivers substantial improvements over existing models such as FinQANet and MT2Net on benchmarks including ConvFinQA and MultiHiertt, achieving new state-of-the-art performance across multiple evaluation metrics. Beyond accuracy, it achieves a nearly 21x speedup in program generation, significantly improving inference efficiency. Unlike traditional models, our system maintains robust performance as the complexity of numerical reasoning increases, highlighting its adaptability in challenging scenarios. Supplementary experiments on the LLM-based framework further show that it provides enriched answer justifications while matching ConcurGen's performance on standard benchmarks.
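For context, HTQA systems such as FinQANet emit small arithmetic programs over values retrieved from tables and text, which are then executed to produce the final answer. The minimal Python sketch below illustrates that program format and a toy executor. The operation names and the "#k" back-reference convention follow the publicly documented FinQA/ConvFinQA DSL; the example numbers are hypothetical, and ConcurGen's actual operation set and parallel decoder are not specified in this record.

# Toy executor for FinQA-style numeric reasoning programs (a sketch,
# not ConcurGen's implementation). Each step is (op, arg1, arg2);
# an argument "#k" refers back to the result of step k.

OPS = {
    "add": lambda a, b: a + b,
    "subtract": lambda a, b: a - b,
    "multiply": lambda a, b: a * b,
    "divide": lambda a, b: a / b,
}

def execute(program: list[tuple[str, str, str]]) -> float:
    """Run the steps in order and return the last result."""
    results: list[float] = []
    for op, *args in program:
        values = [results[int(a[1:])] if a.startswith("#") else float(a)
                  for a in args]
        results.append(OPS[op](*values))
    return results[-1]

# Example: "What was the percentage change in revenue?" with hypothetical
# table values 1230 (current year) and 1100 (prior year).
program = [("subtract", "1230", "1100"),   # #0 = 130.0
           ("divide", "#0", "1100")]       # #1 = 130 / 1100
print(f"{execute(program):.4f}")           # -> 0.1182

A sequential decoder emits such a program token by token, so an early mistake propagates; the paper's parallel approach instead predicts the whole structure at once, which is where the reported speedup and robustness come from.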
Publication type: Article
Language: English
ISSN: 1386-145X (print), 1573-1413 (electronic)
DOI: 10.1007/s11280-025-01351-1
Rights: CC BY
Document code: edsair.doi...........cc544bc730bb7a24ed0fddd53d9e03e6
Database: OpenAIRE