Chart-to-Experience: Benchmarking Multimodal LLMs for Predicting Experiential Impact of Charts

Bibliographic Details
Published in: IEEE Pacific Visualization Symposium, pp. 340–345
Main Authors: Kim, Seon Gyeom, Choi, Jae Young, Rossi, Ryan, Koh, Eunyee, Lee, Tak Yeon
Format: Conference Paper
Language: English
Published: IEEE, 22.04.2025
ISSN: 2165-8773
Description
Summary: The field of Multimodal Large Language Models (MLLMs) has made remarkable progress in visual understanding tasks, presenting a vast opportunity to predict the perceptual and emotional impact of charts. However, it also raises concerns, as many applications of LLMs are based on overgeneralized assumptions from a few examples, lacking sufficient validation of their performance and effectiveness. We introduce Chart-to-Experience, a benchmark dataset comprising 36 charts, evaluated by crowdsourced workers for their impact on seven experiential factors. Using the dataset as ground truth, we evaluated the capabilities of state-of-the-art MLLMs on two tasks: direct prediction and pairwise comparison of charts. Our findings imply that MLLMs are not as sensitive as human evaluators when assessing individual charts, but are accurate and reliable in pairwise comparisons.
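
The pairwise-comparison task described in the summary can be illustrated with a minimal sketch. The code below assumes the OpenAI Python SDK and a vision-capable chat model; the prompt wording, the factor name "trustworthy", the model choice, and the file names are hypothetical placeholders, not the paper's actual protocol or prompts.

```python
# Illustrative sketch only: ask a vision-capable model which of two chart
# images scores higher on one experiential factor. Prompt text, model name,
# factor, and file paths are assumptions, not the benchmark's protocol.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def encode_chart(path: str) -> str:
    """Read a chart image from disk and return it as a base64 data URL."""
    with open(path, "rb") as f:
        return "data:image/png;base64," + base64.b64encode(f.read()).decode()


def compare_charts(chart_a: str, chart_b: str, factor: str) -> str:
    """Query the model for a forced-choice pairwise comparison ('A' or 'B')."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable chat model would do
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Which chart feels more {factor}? Answer 'A' or 'B' only."},
                {"type": "image_url", "image_url": {"url": encode_chart(chart_a)}},
                {"type": "image_url", "image_url": {"url": encode_chart(chart_b)}},
            ],
        }],
    )
    return response.choices[0].message.content.strip()


print(compare_charts("chart_a.png", "chart_b.png", "trustworthy"))
```

A forced binary choice like this sidesteps the calibration problem the paper reports for direct scoring: the model only needs to rank two stimuli, not place one on an absolute scale.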
DOI: 10.1109/PacificVis64226.2025.00040