Naturalness of Attention: Revisiting Attention in Code Language Models

Language models for code such as CodeBERT offer the capability to learn advanced source code representations, but their opacity poses barriers to understanding the properties they capture. Recent attention analysis studies provide initial interpretability insights by focusing solely on attention weights […]
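For context, attention-weight analyses of the kind the abstract refers to typically inspect the per-layer, per-head attention matrices that a transformer exposes. Below is a minimal sketch of such an extraction, assuming the public microsoft/codebert-base checkpoint and the HuggingFace transformers API; it is illustrative only and not code from the paper.

```python
# Minimal sketch: extracting per-layer, per-head attention weights
# from CodeBERT (assumes the microsoft/codebert-base checkpoint).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")
model.eval()

code = "def add(a, b): return a + b"  # hypothetical input snippet
inputs = tokenizer(code, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

# outputs.attentions is a tuple with one tensor per layer,
# each of shape (batch, num_heads, seq_len, seq_len);
# these weights alone are what many prior analyses focus on.
attentions = outputs.attentions
print(len(attentions), attentions[0].shape)
```

Studies of this kind then correlate the extracted weight matrices with code properties (e.g., syntactic relations between tokens), which is the line of analysis the paper revisits.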

Bibliographic Details
Published in: IEEE/ACM International Conference on Software Engineering: New Ideas and Emerging Technologies Results (Online), pp. 107-111
Main Authors: Saad, Mootez; Sharma, Tushar
Format: Conference Proceeding
Language: English
Published: ACM 14.04.2024
ISSN:2832-7632