Please use this identifier to cite or link to this item:
http://dspace.mediu.edu.my:8181/xmlui/handle/1721.1/7093

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.creator | Steinbach, Carl | - |
| dc.date | 2004-10-20T20:29:42Z | - |
| dc.date | 2002-05-01 | - |
| dc.date.accessioned | 2013-10-09T02:48:21Z | - |
| dc.date.available | 2013-10-09T02:48:21Z | - |
| dc.date.issued | 2013-10-09 | - |
| dc.identifier | AITR-2002-007 | - |
| dc.identifier | http://hdl.handle.net/1721.1/7093 | - |
| dc.identifier.uri | http://koha.mediu.edu.my:8181/xmlui/handle/1721 | - |
| dc.description | We describe an adaptive, mid-level approach to the wireless device power management problem. Our approach is based on reinforcement learning, a machine learning framework for autonomous agents. We describe how our framework can be applied to the power management problem in both infrastructure and ad hoc wireless networks. From this thesis we conclude that mid-level power management policies can outperform low-level policies and are more convenient to implement than high-level policies. We also conclude that power management policies need to adapt to the user and network, and that a mid-level power management framework based on reinforcement learning fulfills these requirements. | - |
| dc.format | 41 p. | - |
| dc.format | 8457203 bytes | - |
| dc.format | 989455 bytes | - |
| dc.format | application/postscript | - |
| dc.format | application/pdf | - |
| dc.language | en_US | - |
| dc.relation | AITR-2002-007 | - |
| dc.subject | AI | - |
| dc.subject | reinforcement learning | - |
| dc.subject | power management | - |
| dc.subject | wireless networks | - |
| dc.title | A Reinforcement-Learning Approach to Power Management | - |
| Appears in Collections: | MIT Items | |
Files in This Item:
There are no files associated with this item.
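The abstract above describes a mid-level power-management policy learned by reinforcement learning. As a minimal illustrative sketch only (the state space, actions, rewards, and toy traffic model below are assumptions for illustration, not the thesis's actual formulation), a tabular Q-learning agent can decide whether a wireless interface should sleep or stay awake based on a coarse observation of recent traffic:

```python
import random

# Hypothetical sketch: a tabular Q-learning agent for wireless power
# management. States, actions, rewards, and traffic dynamics are all
# illustrative assumptions, not taken from the thesis.

STATES = ["idle", "low", "high"]   # coarse recent-traffic level
ACTIONS = ["sleep", "awake"]

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def reward(state, action):
    # Sleeping saves energy when idle but delays pending traffic;
    # staying awake costs energy but serves traffic promptly.
    if action == "sleep":
        return {"idle": 1.0, "low": -0.5, "high": -2.0}[state]
    return {"idle": -0.5, "low": 0.5, "high": 1.0}[state]

def next_state(state):
    # Toy traffic dynamics: the traffic level drifts at random.
    return random.choice(STATES)

def train(episodes=5000, seed=0):
    random.seed(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    s = "idle"
    for _ in range(episodes):
        # Epsilon-greedy action selection.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        r = reward(s, a)
        s2 = next_state(s)
        # Standard Q-learning update toward the bootstrapped target.
        best_next = max(q[(s2, act)] for act in ACTIONS)
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
        s = s2
    return q

if __name__ == "__main__":
    q = train()
    for s in STATES:
        print(s, "->", max(ACTIONS, key=lambda act: q[(s, act)]))
```

With this toy reward structure the learned policy sleeps when the link is idle and stays awake under load, which mirrors the kind of adaptive, usage-sensitive behavior the abstract attributes to mid-level policies.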
