Democratizing Advanced Attribution Analyses of Generative Language Models with the Inseq Toolkit

Detailed bibliography
Title: Democratizing Advanced Attribution Analyses of Generative Language Models with the Inseq Toolkit
Authors: Sarti, Gabriele, Feldhus, Nils, Qi, Jirui, Nissim, Malvina, Bisazza, Arianna
Publisher information: CEUR Workshop Proceedings (CEUR-WS.org), 2024.
Year of publication: 2024
Subjects: Feature Attribution, Generative Language Models, Python Toolkit, Natural Language Processing
Description: Inseq is a recent toolkit providing an intuitive and optimized interface for conducting feature attribution analyses of generative language models. In this work, we present the latest improvements to the library, including efforts to simplify the attribution of large language models on consumer hardware, additional attribution approaches, and a new CLI command to detect and attribute context usage in language model generations. We showcase an online demo using Inseq as an attribution backbone for context reliance analysis, and we highlight interesting contextual patterns in language model generations. Ultimately, this release furthers Inseq's mission of centralizing good interpretability practices and enabling fair and reproducible model evaluations.
Document type: Conference object
Language: English
Access URL: https://research.rug.nl/en/publications/f719d93e-ca37-4965-b935-69bc53a48a4f
https://hdl.handle.net/11370/f719d93e-ca37-4965-b935-69bc53a48a4f
Rights: CC BY
Accession number: edsair.dris...01423..3f1b08ba700891a86a4a076231d8432f
Database: OpenAIRE