Generating neural architectures from parameter spaces for multi-agent reinforcement learning

Bibliographic Details
Published in: Pattern Recognition Letters, Vol. 185, pp. 272–278
Main Authors: Artaud, Corentin; De-Silva, Varuna; Pina, Rafael; Shi, Xiyu
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.09.2024
Subjects: Multi-agent reinforcement learning; Transformers; Parameter generation; Generative models; Neural networks
ISSN: 0167-8655
Abstract We explore a data-driven approach to generating neural network parameters to determine whether generative models can capture the underlying distribution of a collection of neural network checkpoints. We compile a dataset of checkpoints from neural networks trained within the multi-agent reinforcement learning framework, thus potentially producing previously unseen combinations of neural network parameters. In particular, our generative model is a conditional transformer-based variational autoencoder that, when provided with random noise and a specified performance metric – in our context, returns – predicts the appropriate distribution over the parameter space to achieve the desired performance metric. Our method successfully generates parameters for a specified optimal return without further fine-tuning. We also show that the parameters generated using this approach are more constrained and less variable and, most importantly, perform on par with those trained directly under the multi-agent reinforcement learning framework. We test our method on neural network architectures commonly employed in state-of-the-art algorithms.
Highlights:
• Variational autoencoders with a multi-head self-attention architecture.
• A generated dataset of neural network checkpoints and its augmentation process.
• Neural network parameters generated from random noise, conditioned on return.
• Implications discussed in the context of MARL, with analysis of generated vs. traditionally trained parameters.
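A minimal sketch (not the authors' implementation) of the generation step described in the abstract: a return-conditioned, transformer-based decoder that maps random noise plus a target return to a flat vector of neural network parameters. The module name ParamDecoder, the layer sizes, and the token layout are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn

class ParamDecoder(nn.Module):
    """Decode (random noise, target return) into a flat parameter vector."""

    def __init__(self, latent_dim=64, n_tokens=32, token_dim=128, n_params=4096):
        super().__init__()
        # Project the concatenated noise and return condition into a token sequence.
        self.cond_proj = nn.Linear(latent_dim + 1, n_tokens * token_dim)
        layer = nn.TransformerEncoderLayer(d_model=token_dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        # Map the processed tokens back to a flat vector of network parameters.
        self.to_params = nn.Linear(n_tokens * token_dim, n_params)
        self.n_tokens, self.token_dim = n_tokens, token_dim

    def forward(self, z, target_return):
        # z: (batch, latent_dim) random noise; target_return: (batch, 1) desired return.
        h = self.cond_proj(torch.cat([z, target_return], dim=-1))
        h = h.view(-1, self.n_tokens, self.token_dim)
        h = self.transformer(h)
        return self.to_params(h.flatten(1))  # (batch, n_params)

# Usage sketch: sample parameters for a (normalised) target return of 0.9,
# then reshape and load them into an agent's value network.
decoder = ParamDecoder()
params = decoder(torch.randn(1, 64), torch.tensor([[0.9]]))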
Authors:
– Artaud, Corentin (c.artaud2@lboro.ac.uk; ORCID 0009-0002-0387-235X)
– De-Silva, Varuna (v.d.de-silva2@lboro.ac.uk)
– Pina, Rafael (r.m.pina@lboro.ac.uk)
– Shi, Xiyu (x.shi@lboro.ac.uk)
CitedBy_id crossref_primary_10_1016_j_asoc_2025_112986
ContentType Journal Article
Copyright 2024 The Author(s)
DOI 10.1016/j.patrec.2024.07.013
DatabaseName ScienceDirect Open Access Titles
Elsevier:ScienceDirect:Open Access
CrossRef
DatabaseTitle CrossRef
Discipline Engineering
Computer Science
EndPage 278
ExternalDocumentID 10_1016_j_patrec_2024_07_013
S0167865524002162
ISICitedReferencesCount 1
ISSN 0167-8655
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Keywords Multi-agent reinforcement learning
Transformers
Parameter generation
Generative models
Neural networks
Language English
License This is an open access article under the CC BY license.
ORCID 0009-0002-0387-235X
OpenAccessLink https://dx.doi.org/10.1016/j.patrec.2024.07.013
PageCount 7
PublicationCentury 2000
PublicationDate September 2024
PublicationDateYYYYMMDD 2024-09-01
PublicationDecade 2020
PublicationTitle Pattern recognition letters
PublicationYear 2024
Publisher Elsevier B.V
SourceID crossref
elsevier
SourceType Enrichment Source
Index Database
Publisher
StartPage 272
URI https://dx.doi.org/10.1016/j.patrec.2024.07.013
Volume 185