Please use this identifier to cite or link to this item:
http://dspace.mediu.edu.my:8181/xmlui/handle/1721.1/3851
Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.creator | Chang, Yu-Han | - |
| dc.creator | Ho, Tracey | - |
| dc.creator | Kaelbling, Leslie P. | - |
| dc.date | 2003-12-13T18:55:17Z | - |
| dc.date | 2003-12-13T18:55:17Z | - |
| dc.date | 2004-01 | - |
| dc.date.accessioned | 2013-10-09T02:32:49Z | - |
| dc.date.available | 2013-10-09T02:32:49Z | - |
| dc.date.issued | 2013-10-09 | - |
| dc.identifier | http://hdl.handle.net/1721.1/3851 | - |
| dc.identifier.uri | http://koha.mediu.edu.my:8181/xmlui/handle/1721 | - |
| dc.description | In large multiagent games, partial observability, coordination, and credit assignment persistently plague attempts to design good learning algorithms. We provide a simple and efficient algorithm that in part uses a linear system to model the world from a single agent’s limited perspective, and takes advantage of Kalman filtering to allow an agent to construct a good training signal and effectively learn a near-optimal policy in a wide variety of settings. A sequence of increasingly complex empirical tests verifies the efficacy of this technique. | - |
| dc.description | Singapore-MIT Alliance (SMA) | - |
| dc.format | 1408858 bytes | - |
| dc.format | application/pdf | - |
| dc.language | en_US | - |
| dc.relation | Computer Science (CS); | - |
| dc.subject | Kalman filtering | - |
| dc.subject | multi-agent systems | - |
| dc.subject | Q-learning | - |
| dc.subject | reinforcement learning | - |
| dc.title | All learning is local: Multi-agent learning in global reward games | - |
| dc.type | Article | - |
Appears in Collections: MIT Items
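The abstract describes filtering a shared global reward through a Kalman filter so that each agent recovers a local training signal. A minimal sketch of that idea is below, assuming a scalar random-walk model for the noise contributed by other agents; the class name, parameters, and update scheme are illustrative assumptions, not the paper's actual formulation.

```python
class RewardFilter:
    """Scalar Kalman filter sketch: the observed global reward g_t is
    modeled as the agent's personal reward r_t plus a drift term b_t
    contributed by other agents, where b_t follows a random walk with
    process variance q and the observation noise variance is s.
    (Illustrative assumption; not the paper's exact model.)"""

    def __init__(self, q=0.1, s=1.0):
        self.b = 0.0   # current estimate of the other-agents term b_t
        self.p = 1.0   # variance of that estimate
        self.q = q     # random-walk (process) variance of b_t
        self.s = s     # observation noise variance

    def personal_reward(self, global_reward, expected_r=0.0):
        """Filter one global-reward observation into a local signal.

        expected_r is the agent's current estimate of its own reward
        (e.g. from its value function); the residual is treated as a
        noisy observation of b_t."""
        # Predict: b_t drifts as a random walk, so uncertainty grows.
        self.p += self.q
        # Update: standard scalar Kalman gain and correction.
        k = self.p / (self.p + self.s)
        self.b += k * (global_reward - expected_r - self.b)
        self.p *= (1.0 - k)
        # The filtered signal is the global reward minus the estimated
        # contribution of the other agents.
        return global_reward - self.b
```

A Q-learner would then use the returned value in place of the raw global reward. For example, if an agent's true personal reward is 1.0 but the observed global reward is a constant 6.0 (the other 5.0 coming from other agents), repeated calls to `personal_reward(6.0, expected_r=1.0)` drive the estimate of the drift term toward 5.0 and the filtered signal toward 1.0.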
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.