Democratizing Advanced Attribution Analyses of Generative Language Models with the Inseq Toolkit

Bibliographic Details
Title: Democratizing Advanced Attribution Analyses of Generative Language Models with the Inseq Toolkit
Authors: Sarti, Gabriele; Feldhus, Nils; Qi, Jirui; Nissim, Malvina; Bisazza, Arianna
Publisher Information: CEUR Workshop Proceedings (CEUR-WS.org), 2024.
Publication Year: 2024
Subject Terms: Feature Attribution, Generative Language Models, Python Toolkit, Natural Language Processing
Description: Inseq is a recent toolkit providing an intuitive and optimized interface to conduct feature attribution analyses of generative language models. In this work, we present the latest improvements to the library, including efforts to simplify the attribution of large language models on consumer hardware, additional attribution approaches, and a new CLI command to detect and attribute context usage in language model generations. We showcase an online demo using Inseq as an attribution backbone for context reliance analysis, and we highlight interesting contextual patterns in language model generations. Ultimately, this release furthers Inseq’s mission of centralizing good interpretability practices and enabling fair and reproducible model evaluations.
Document Type: Conference object
Language: English
Access URL: https://research.rug.nl/en/publications/f719d93e-ca37-4965-b935-69bc53a48a4f
https://hdl.handle.net/11370/f719d93e-ca37-4965-b935-69bc53a48a4f
Rights: CC BY
Accession Number: edsair.dris...01423..3f1b08ba700891a86a4a076231d8432f
Database: OpenAIRE