SQUAB: Evaluating LLM robustness to Ambiguous and Unanswerable Questions in Semantic Parsing

Saved in:
Bibliographic Details
Title: SQUAB: Evaluating LLM robustness to Ambiguous and Unanswerable Questions in Semantic Parsing
Authors: Papicchio, Simone, Cagliero, Luca, Papotti, Paolo
Publisher: Association for Computational Linguistics
Publication Year: 2025
Repository: PORTO@iris (Publications Open Repository TOrino - Politecnico di Torino)
Keywords: Text2SQL, Ambiguity, Dynamic Benchmark, Industry, Unanswerable, Text-to-SQL
Description: Large Language Models (LLMs) have demonstrated robust performance in Semantic Parsing (SP) for well-defined queries with unambiguous intent and answerable responses. However, practical user questions frequently deviate from these ideal conditions, challenging the applicability of existing benchmarks. To address this issue, we introduce SQUAB, an automatic dataset generator of Ambiguous and Unanswerable questions. SQUAB generates complex, annotated SP tests using a blend of SQL and LLM capabilities. Results show that SQUAB reduces test generation costs by up to 99% compared to human-based solutions while aligning with real-world question patterns. Furthermore, these tests challenge LLM performance while revealing disparities between public and proprietary datasets. This highlights the need for a dynamic, automatic dataset generator such as SQUAB. The code is designed for user extension to accommodate new ambiguous and unanswerable patterns and is available at https://github.com/spapicchio/squab.
Publication Type: conference object
File Description: Electronic
Language: English
Relation: Part of: Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing; Conference on Empirical Methods in Natural Language Processing; pages 17937-17957 (21 pages); https://hdl.handle.net/11583/3005408
DOI: 10.18653/v1/2025.emnlp-main.906
Availability: https://hdl.handle.net/11583/3005408
https://doi.org/10.18653/v1/2025.emnlp-main.906
https://aclanthology.org/2025.emnlp-main.906/
Rights: Open Access; License: Creative Commons Attribution 4.0 (CC BY 4.0), http://creativecommons.org/licenses/by/4.0/
Document Code: edsbas.B3CB5026
Database: BASE