Communication for Improving Policy Computation in Distributed POMDPs


Detailed Bibliography
Published in: Autonomous Agents and Multiagent Systems: Proceedings, 3rd International Joint Conference, New York City, New York, 2004, pp. 1098-1105
Main Authors: Nair, Ranjit; Roth, Maayan; Yokoo, Makoto
Format: Conference Paper
Language: English
Publication Details: Washington, DC, USA: IEEE Computer Society, 19.07.2004
Series: ACM Conferences
ISBN: 9781581138641, 1581138644
Online Access: Get full text
Abstract Distributed Partially Observable Markov Decision Problems (POMDPs) are emerging as a popular approach for modeling multiagent teamwork where a group of agents work together to jointly maximize a reward function. Since the problem of finding the optimal joint policy for a distributed POMDP has been shown to be NEXP-Complete if no assumptions are made about the domain conditions, several locally optimal approaches have emerged as a viable solution. However, the use of communicative actions as part of these locally optimal algorithms has been largely ignored or has been applied only under restrictive assumptions about the domain. In this paper, we show how communicative acts can be explicitly introduced in order to find locally optimal joint policies that allow agents to coordinate better through synchronization achieved via communication. Furthermore, the introduction of communication allows us to develop a novel compact policy representation that results in savings of both space and time which are verified empirically. Finally, through the imposition of constraints on communication such as not going without communicating for more than K steps, even greater space and time savings can be obtained.
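The claimed space savings can be illustrated with a back-of-the-envelope count. This is a minimal sketch, not the paper's implementation: it assumes that after each synchronizing communication the team is fully coordinated, so a policy only needs to branch on observation histories accumulated since the last communication (at most K-1 steps), rather than on full-length histories. All function names below are illustrative.

```python
# Sketch (assumption-based, not from the paper): compare the number of policy
# nodes needed with and without synchronization via communication every K steps.

def num_histories(num_observations: int, length: int) -> int:
    """Number of distinct observation histories of exactly `length` steps."""
    return num_observations ** length

def policy_size_without_sync(num_observations: int, horizon: int) -> int:
    # One action choice per observation history of length 0..horizon-1.
    return sum(num_histories(num_observations, t) for t in range(horizon))

def policy_size_with_sync(num_observations: int, horizon: int, k: int) -> int:
    # After each communication the team is synchronized, so the policy only
    # branches on histories of length 0..k-1, reused between sync points.
    sync_points = -(-horizon // k)  # ceil(horizon / k)
    per_segment = sum(num_histories(num_observations, t) for t in range(k))
    return sync_points * per_segment

if __name__ == "__main__":
    O, T, K = 2, 12, 3
    print("no sync :", policy_size_without_sync(O, T))   # 4095 policy nodes
    print("sync K=3:", policy_size_with_sync(O, T, K))   # 28 policy nodes
```

Under these assumptions, bounding the interval between communications by K turns exponential-in-horizon growth into exponential-in-K growth repeated over ceil(T/K) segments, which is the intuition behind the compact representation described in the abstract.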
Author Nair, Ranjit
Yokoo, Makoto
Roth, Maayan
Author_xml – sequence: 1
  givenname: Ranjit
  surname: Nair
  fullname: Nair, Ranjit
  organization: University of Southern California
– sequence: 2
  givenname: Maayan
  surname: Roth
  fullname: Roth, Maayan
  organization: Carnegie Mellon University
– sequence: 3
  givenname: Makoto
  surname: Yokoo
  fullname: Yokoo, Makoto
  organization: Kyushu University
ContentType Conference Proceeding
DBID 7SC
8FD
JQ2
L7M
L~C
L~D
DOI 10.5555/1018411.1018878
DatabaseName Computer and Information Systems Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
DatabaseTitle Computer and Information Systems Abstracts
Technology Research Database
Computer and Information Systems Abstracts – Academic
Advanced Technologies Database with Aerospace
ProQuest Computer Science Collection
Computer and Information Systems Abstracts Professional
DatabaseTitleList Computer and Information Systems Abstracts

DeliveryMethod fulltext_linktorsrc
Discipline Computer Science
EndPage 1105
Genre Conference Paper
GroupedDBID 6IE
6IK
6IL
AAJGR
AAVQY
ACM
ADPZR
ALMA_UNASSIGNED_HOLDINGS
APO
BEFXN
BFFAM
BGNUA
BKEBE
BPEOZ
CBEJK
GUFHI
OCL
RIB
RIC
RIE
RIL
7SC
8FD
AAWTH
JQ2
L7M
LHSKQ
L~C
L~D
ISBN 9781581138641
1581138644
IsPeerReviewed false
IsScholarly false
Language English
LinkModel OpenURL
MeetingName AAMAS04: The Third International Joint Conference on Autonomous Agents and Multi-Agent Systems 2004
Notes SourceType-Conference Papers & Proceedings-1
ObjectType-Conference Paper-1
content type line 25
PQID 31092060
PQPubID 23500
PageCount 8
ParticipantIDs acm_books_10_5555_1018411_1018878_brief
acm_books_10_5555_1018411_1018878
proquest_miscellaneous_31092060
PublicationCentury 2000
PublicationDate 20040719
20040701
PublicationDateYYYYMMDD 2004-07-19
2004-07-01
PublicationDate_xml – month: 07
  year: 2004
  text: 20040719
  day: 19
PublicationDecade 2000
PublicationPlace Washington, DC, USA
PublicationPlace_xml – name: Washington, DC, USA
PublicationSeriesTitle ACM Conferences
PublicationTitle Autonomous Agents and Multiagent Systems: Proceedings, 3rd International Joint Conference, New York City, New York, 2004.
PublicationYear 2004
Publisher IEEE Computer Society
Publisher_xml – name: IEEE Computer Society
SourceID proquest
acm
SourceType Aggregation Database
Publisher
StartPage 1098
SubjectTerms Computing methodologies -- Artificial intelligence -- Distributed artificial intelligence -- Cooperation and coordination
Computing methodologies -- Artificial intelligence -- Distributed artificial intelligence -- Multi-agent systems
Mathematics of computing -- Probability and statistics -- Probabilistic representations -- Markov networks
Mathematics of computing -- Probability and statistics -- Stochastic processes -- Markov processes
Theory of computation -- Theory and algorithms for application domains -- Machine learning theory -- Markov decision processes
Title Communication for Improving Policy Computation in Distributed POMDPs
URI https://www.proquest.com/docview/31092060