Transformer model with external token memories and attention for PersonaChat

Bibliographic Details
Published in: Scientific Reports, Vol. 15, no. 1, Art. no. 20691 (11 pages)
Main Authors: Sun, Taize; Fujita, Katsuhide
Format: Journal Article
Language:English
Published: London: Nature Publishing Group UK (Nature Portfolio), 01.07.2025
ISSN: 2045-2322
Description
Summary: Many existing studies aim to develop dialog systems that act as efficiently and accurately as humans. The prevailing approach trains large machine-learning models on extensive datasets so that token information, and the connections between tokens, exist solely within the model structure. This paper introduces a transformer model with external token memory and attention (Tmema), inspired by humans’ ability to define and remember each object in a chat. Tmema defines and remembers each object or token in its memory, which is generated through random initialization and updated via backpropagation. In the model’s encoder, a bidirectional self-attention mechanism and the external memory jointly compute latent information for each input token. During text generation, this latent information is synchronously added to the corresponding external attention of each token in the one-way (causal) self-attention decoder, improving the model’s performance. The proposed model outperforms state-of-the-art approaches on the public PersonaChat dataset in both automatic and human evaluations. All code and data needed to reproduce the experiments are freely available at https://github.com/Ozawa333/Tmema.
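
The abstract gives enough detail to sketch the core mechanism. The PyTorch fragment below is a minimal, hypothetical illustration of an external token memory (one randomly initialized, backprop-trained vector per vocabulary token) fused with bidirectional self-attention in an encoder layer. All class and parameter names here are assumptions, and the additive fusion step is a guess; the authors' actual implementation is in the GitHub repository linked above.

    import torch
    import torch.nn as nn

    class ExternalTokenMemory(nn.Module):
        # One trainable memory vector per vocabulary token, randomly
        # initialized and updated by backpropagation, as the abstract describes.
        def __init__(self, vocab_size, d_model):
            super().__init__()
            self.memory = nn.Parameter(torch.randn(vocab_size, d_model) * 0.02)

        def forward(self, token_ids):
            # Look up the external memory slot of each input token id.
            return self.memory[token_ids]

    class TmemaEncoderLayer(nn.Module):
        # Sketch of an encoder layer: bidirectional self-attention whose
        # output is combined with each token's external memory. The additive
        # fusion here is an assumption, not the paper's exact formulation.
        def __init__(self, d_model, n_heads, vocab_size):
            super().__init__()
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.mem = ExternalTokenMemory(vocab_size, d_model)
            self.norm = nn.LayerNorm(d_model)

        def forward(self, x, token_ids):
            attn_out, _ = self.attn(x, x, x)         # bidirectional self-attention
            latent = attn_out + self.mem(token_ids)  # add each token's external memory
            return self.norm(x + latent)             # residual connection + layer norm

    # Example: a batch of 2 sequences of 16 token ids over a 30k vocabulary.
    layer = TmemaEncoderLayer(d_model=512, n_heads=8, vocab_size=30000)
    ids = torch.randint(0, 30000, (2, 16))
    x = torch.randn(2, 16, 512)                      # token embeddings
    out = layer(x, ids)                              # shape (2, 16, 512)

In the full model, the per-token latent information computed this way would additionally be injected into the corresponding external attention of the causal decoder during generation, per the abstract.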
DOI: 10.1038/s41598-025-98850-y