Please use this identifier to cite or link to this item: http://dspace.mediu.edu.my:8181/xmlui/handle/1721.1/3851
Full metadata record
dc.creator: Chang, Yu-Han
dc.creator: Ho, Tracey
dc.creator: Kaelbling, Leslie P.
dc.date: 2003-12-13T18:55:17Z
dc.date: 2004-01
dc.date.accessioned: 2013-10-09T02:32:49Z
dc.date.available: 2013-10-09T02:32:49Z
dc.date.issued: 2013-10-09
dc.identifier: http://hdl.handle.net/1721.1/3851
dc.identifier.uri: http://koha.mediu.edu.my:8181/xmlui/handle/1721
dc.description: In large multiagent games, partial observability, coordination, and credit assignment persistently plague attempts to design good learning algorithms. We provide a simple and efficient algorithm that models the world from a single agent's limited perspective as a linear system, and uses Kalman filtering to let the agent construct a good training signal and effectively learn a near-optimal policy in a wide variety of settings. A sequence of increasingly complex empirical tests verifies the efficacy of this technique.
dc.description: Singapore-MIT Alliance (SMA)
dc.format: 1408858 bytes
dc.format: application/pdf
dc.language: en_US
dc.relation: Computer Science (CS);
dc.subject: Kalman filtering
dc.subject: multi-agent systems
dc.subject: Q-learning
dc.subject: reinforcement learning
dc.title: All learning is local: Multi-agent learning in global reward games
dc.type: Article
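The abstract describes filtering a shared global reward into a personal training signal via Kalman filtering. A minimal sketch of that idea, assuming a scalar Kalman filter over a random-walk "other agents" term and a one-state, two-action Q-learner; the class name, noise variances, and toy setting are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

class LocalRewardFilter:
    """Scalar Kalman filter tracking the other agents' contribution b_t to
    an observed global reward g_t = r_t + b_t, with b_t modeled as a random
    walk. Illustrative sketch only; the paper's filter state is richer."""

    def __init__(self, q=0.1, r=1.0):
        self.b_hat = 0.0  # estimate of other agents' contribution
        self.p = 1.0      # variance of that estimate
        self.q = q        # assumed random-walk (process) noise variance
        self.r = r        # assumed observation noise variance

    def step(self, global_reward, local_estimate):
        # Predict: a random walk leaves the mean unchanged,
        # but uncertainty grows by q.
        self.p += self.q
        # Measure b_t as the residual between the observed global
        # reward and the agent's current local-reward estimate.
        z = global_reward - local_estimate
        k = self.p / (self.p + self.r)   # Kalman gain
        self.b_hat += k * (z - self.b_hat)
        self.p *= 1.0 - k
        # Filtered personal training signal for the learner.
        return global_reward - self.b_hat

# Toy usage: one state, two actions; action 1 earns local reward 1,
# action 0 earns 0, while other agents add a drifting offset to the
# global reward the agent actually observes.
rng = np.random.default_rng(0)
q_vals = np.zeros(2)        # Q-values double as local-reward estimates here
filt = LocalRewardFilter()
alpha = 0.1                 # learning rate
b_true = 0.0
for t in range(2000):
    a = int(rng.integers(2))               # explore uniformly
    b_true += rng.normal(0.0, 0.05)        # other agents drift
    g = float(a) + b_true                  # observed global reward
    r_hat = filt.step(g, q_vals[a])        # filtered local reward
    q_vals[a] += alpha * (r_hat - q_vals[a])  # one-step bandit update
```

With the common offset filtered out, the learner's preference for action 1 reflects its own local reward rather than the other agents' drift; in the full multi-state setting the paper's filter would track per-state rewards jointly rather than a single scalar.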
Appears in Collections: MIT Items

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.