Democratizing Advanced Attribution Analyses of Generative Language Models with the Inseq Toolkit

Bibliographic Details
Title: Democratizing Advanced Attribution Analyses of Generative Language Models with the Inseq Toolkit
Authors: Sarti, Gabriele, Feldhus, Nils, Qi, Jirui, Nissim, Malvina, Bisazza, Arianna
Publisher Information: CEUR Workshop Proceedings (CEUR-WS.org), 2024.
Publication Year: 2024
Keywords: Feature Attribution, Generative Language Models, Python Toolkit, Natural Language Processing
Description: Inseq is a recent toolkit providing an intuitive and optimized interface to conduct feature attribution analyses of generative language models. In this work, we present the latest improvements to the library, including efforts to simplify the attribution of large language models on consumer hardware, additional attribution approaches, and a new CLI command to detect and attribute context usage in language model generations. We showcase an online demo using Inseq as an attribution backbone for context reliance analysis, and we highlight interesting contextual patterns in language model generations. Ultimately, this release furthers Inseq’s mission of centralizing good interpretability practices and enabling fair and reproducible model evaluations.
Publication Type: Conference object
Language: English
Access URL: https://research.rug.nl/en/publications/f719d93e-ca37-4965-b935-69bc53a48a4f
https://hdl.handle.net/11370/f719d93e-ca37-4965-b935-69bc53a48a4f
Rights: CC BY
Document Code: edsair.dris...01423..3f1b08ba700891a86a4a076231d8432f
Database: OpenAIRE
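Note: As an illustration of the feature attribution workflow summarized in the description, the following is a minimal sketch based on the load_model/attribute entry points of the inseq Python package; the model name ("gpt2"), the attribution method ("integrated_gradients"), and the input text are illustrative assumptions, not details taken from this record.

    import inseq

    # Load a generative language model together with an attribution method;
    # "integrated_gradients" is one of several gradient-based methods the toolkit supports.
    model = inseq.load_model("gpt2", "integrated_gradients")

    # Attribute a generation and display per-token importance scores.
    out = model.attribute("The capital of the Netherlands is")
    out.show()

The context-usage analysis mentioned in the description is exposed through a dedicated command-line interface in recent Inseq releases; see the toolkit's documentation for its exact invocation and options.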