Transformer model with external token memories and attention for PersonaChat


Detailed Bibliography
Published in: Scientific Reports, Volume 15, Issue 1, Article 20691 (11 pages)
Main authors: Sun, Taize; Fujita, Katsuhide
Medium: Journal Article
Language: English
Publication details: London: Nature Publishing Group UK, 01.07.2025
Nature Portfolio
ISSN: 2045-2322
Abstract Many existing studies aim to develop a dialog system capable of acting as efficiently and accurately as humans. The prevailing approach involves using large machine-learning models and extensive datasets for training to ensure that token information and the connections between them exist solely within the model structure. This paper introduces a transformer model with external token memory and attention (Tmema) that is inspired by humans’ ability to define and remember each object in a chat. Tmema can define and remember each object or token in its memory, which is generated through random initialization and updated using backpropagation. In the model’s encoder, we utilized a bidirectional self-attention mechanism and external memory to compute the latent information for each input token. When generating text, the latent information is synchronously added to the corresponding external attention of the token in the one-way self-attention decoder, enhancing the model’s performance. We demonstrate that our proposed model outperforms state-of-the-art approaches on the public PersonaChat dataset across automatic and human evaluations. All code and data used to reproduce the experiments are freely available at https://github.com/Ozawa333/Tmema.
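The abstract describes three ingredients: a randomly initialized, backprop-updated external memory with one entry per token, bidirectional self-attention plus memory lookup in the encoder, and causal self-attention in the decoder with the encoder's per-token latent information added in. A minimal numpy sketch of that data flow follows; all names, shapes, and the way the memory is combined with attention are illustrative assumptions, not the paper's implementation (see the linked repository for the actual code).

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, D = 10, 4  # toy vocabulary size and model dimension

# External token memory: one vector per vocabulary token, randomly
# initialized; in the real model it is updated by backpropagation.
memory = rng.normal(size=(VOCAB, D))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def encode(token_ids, x):
    """Bidirectional self-attention over the whole sequence, plus each
    token's external memory entry, yielding per-token latent information."""
    attn = softmax(x @ x.T / np.sqrt(D))      # full (bidirectional) attention
    return attn @ x + memory[token_ids]       # add external memory per token

def decode_step(t, x, latent):
    """Causal (one-way) self-attention up to step t; the encoder's latent
    information for the same token position is added synchronously."""
    causal = softmax(x[: t + 1] @ x[: t + 1].T / np.sqrt(D))
    return (causal @ x[: t + 1])[-1] + latent[t]

token_ids = np.array([1, 3, 5])
x = memory[token_ids]                         # embed via the same memory
latent = encode(token_ids, x)
h = decode_step(1, x, latent)
print(latent.shape, h.shape)                  # (3, 4) (4,)
```

The sketch omits everything a real transformer needs (separate Q/K/V projections, multiple heads, feed-forward layers, training); it only shows where an external per-token memory would enter the encoder and where the resulting latent vectors would be re-injected into a causal decoder.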
ArticleNumber 20691
Author Sun, Taize
Fujita, Katsuhide
Author_xml – sequence: 1
  givenname: Taize
  surname: Sun
  fullname: Sun, Taize
  email: s227587t@st.go.tuat.ac.jp
  organization: Tokyo University of Agriculture and Technology
– sequence: 2
  givenname: Katsuhide
  surname: Fujita
  fullname: Fujita, Katsuhide
  organization: Tokyo University of Agriculture and Technology
BackLink https://www.ncbi.nlm.nih.gov/pubmed/40594946$$D View this record in MEDLINE/PubMed
ContentType Journal Article
Copyright The Author(s) 2025
DOI 10.1038/s41598-025-98850-y
DatabaseName Springer Nature OA Free Journals
CrossRef
PubMed
MEDLINE - Academic
PubMed Central (Full Participant titles)
DOAJ Open Access Full Text
DatabaseTitle CrossRef
PubMed
MEDLINE - Academic
DatabaseTitleList PubMed; MEDLINE - Academic
Database_xml – sequence: 1
  dbid: DOA
  name: DOAJ Open Access Full Text
  url: https://www.doaj.org/
  sourceTypes: Open Website
– sequence: 2
  dbid: NPM
  name: PubMed
  url: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed
  sourceTypes: Index Database
– sequence: 3
  dbid: 7X8
  name: MEDLINE - Academic
  url: https://search.proquest.com/medline
  sourceTypes: Aggregation Database
DeliveryMethod fulltext_linktorsrc
Discipline Biology
EISSN 2045-2322
EndPage 11
ExternalDocumentID oai_doaj_org_article_b1bd56efaa3940d18e146da089305979
PMC12218397
40594946
10_1038_s41598_025_98850_y
Genre Journal Article
ISICitedReferencesCount 0
ISICitedReferencesURI http://www.webofscience.com/api/gateway?GWVersion=2&SrcApp=Summon&SrcAuth=ProQuest&DestLinkType=CitingArticles&DestApp=WOS_CPL&KeyUT=001522980000012&url=https%3A%2F%2Fcvtisr.summon.serialssolutions.com%2F%23%21%2Fsearch%3Fho%3Df%26include.ft.matches%3Dt%26l%3Dnull%26q%3D
ISSN 2045-2322
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 1
Keywords Attention mechanism
Persona-Chat
Dialogue system
External memory
Encoder–decoder model
Fine-tuning
Language English
License 2025. The Author(s).
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
LinkModel DirectLink
OpenAccessLink https://doaj.org/article/b1bd56efaa3940d18e146da089305979
PMID 40594946
PQID 3226355238
PQPubID 23479
PageCount 11
PublicationCentury 2000
PublicationDate 2025-07-01
PublicationDateYYYYMMDD 2025-07-01
PublicationDecade 2020
PublicationPlace London
PublicationTitle Scientific reports
PublicationTitleAbbrev Sci Rep
PublicationTitleAlternate Sci Rep
PublicationYear 2025
Publisher Nature Publishing Group UK
Nature Portfolio
SourceID doaj
pubmedcentral
proquest
pubmed
crossref
springer
SourceType Open Website
Open Access Repository
Aggregation Database
Index Database
Publisher
StartPage 20691
SubjectTerms 639/705/117
639/705/258
Attention mechanism
Dialogue system
Encoder–decoder model
External memory
Fine-tuning
Humanities and Social Sciences
multidisciplinary
Persona-Chat
Science
Science (multidisciplinary)
Title Transformer model with external token memories and attention for PersonaChat
URI https://link.springer.com/article/10.1038/s41598-025-98850-y
https://www.ncbi.nlm.nih.gov/pubmed/40594946
https://www.proquest.com/docview/3226355238
https://pubmed.ncbi.nlm.nih.gov/PMC12218397
https://doaj.org/article/b1bd56efaa3940d18e146da089305979
Volume 15
hasFullText 1
inHoldings 1
linkProvider Directory of Open Access Journals