Please use this identifier to cite or link to this item:
http://dspace.mediu.edu.my:8181/xmlui/handle/1721.1/7206

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.creator | Jordan, Michael I. | - |
| dc.creator | Jacobs, Robert A. | - |
| dc.date | 2004-10-20T20:49:48Z | - |
| dc.date | 2004-10-20T20:49:48Z | - |
| dc.date | 1993-08-01 | - |
| dc.date.accessioned | 2013-10-09T02:48:33Z | - |
| dc.date.available | 2013-10-09T02:48:33Z | - |
| dc.date.issued | 2013-10-09 | - |
| dc.identifier | AIM-1440 | - |
| dc.identifier | CBCL-083 | - |
| dc.identifier | http://hdl.handle.net/1721.1/7206 | - |
| dc.identifier.uri | http://koha.mediu.edu.my:8181/xmlui/handle/1721 | - |
| dc.description | We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM's). Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain. | - |
| dc.format | 29 p. | - |
| dc.format | 190144 bytes | - |
| dc.format | 678911 bytes | - |
| dc.format | application/octet-stream | - |
| dc.format | application/pdf | - |
| dc.language | en_US | - |
| dc.relation | AIM-1440 | - |
| dc.relation | CBCL-083 | - |
| dc.subject | supervised learning | - |
| dc.subject | statistics | - |
| dc.subject | decision trees | - |
| dc.subject | neural networks | - |
| dc.title | Hierarchical Mixtures of Experts and the EM Algorithm | - |
| Appears in Collections: | MIT Items | |
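The abstract describes learning the architecture's parameters by maximum likelihood via an Expectation-Maximization (EM) algorithm. As a rough illustration only, the sketch below implements a simplified *one-level* mixture of linear experts rather than the memo's full hierarchical model: the noise variance is held fixed, and the softmax gating network is updated with a single gradient step per iteration instead of the iteratively reweighted least-squares inner loop the memo develops for GLIMs. All names (`fit_moe`, the toy data, the hyperparameters) are illustrative assumptions, not from the paper.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_moe(X, y, n_experts=2, n_iter=50, sigma2=0.1, seed=0):
    """EM for a one-level mixture of linear experts (simplified sketch)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=0.1, size=(n_experts, d))  # expert regression weights
    V = rng.normal(scale=0.1, size=(n_experts, d))  # gating-network weights
    for _ in range(n_iter):
        # E-step: posterior responsibilities h_ij of expert j for point i,
        # computed in the log domain for numerical stability.
        g = softmax(X @ V.T)                         # gating probabilities
        resid = y[:, None] - X @ W.T                 # per-expert residuals
        log_h = np.log(g + 1e-12) - 0.5 * resid**2 / sigma2
        log_h -= log_h.max(axis=1, keepdims=True)
        h = np.exp(log_h)
        h /= h.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted least squares for each expert.
        for j in range(n_experts):
            Hw = h[:, j]
            A = X.T @ (Hw[:, None] * X) + 1e-6 * np.eye(d)
            W[j] = np.linalg.solve(A, X.T @ (Hw * y))
        # Gating update: one gradient-ascent step on the expected
        # complete-data log-likelihood (gradient is (h - g)^T X).
        V += 0.5 * (h - g).T @ X / n
    return W, V, h

# Toy piecewise-linear data: y = -|x|, i.e. slope +1 for x < 0, -1 for x >= 0.
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=200)
y = -np.abs(x) + 0.01 * rng.normal(size=200)
X = np.column_stack([x, np.ones_like(x)])  # append a bias feature
W, V, h = fit_moe(X, y)
```

With two experts on this piecewise-linear target, each expert can specialize on one linear regime while the gating network learns the split, which is the division-of-labor behavior the mixture-of-experts family is designed for.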
Files in This Item:
There are no files associated with this item.
