Please use this identifier to cite or link to this item: http://dspace.mediu.edu.my:8181/xmlui/handle/1721.1/7188
Full metadata record
DC Field | Value | Language
dc.creator | Ghahramani, Zoubin | -
dc.creator | Jordan, Michael I. | -
dc.date | 2004-10-20T20:49:14Z | -
dc.date | 2004-10-20T20:49:14Z | -
dc.date | 1996-02-09 | -
dc.date.accessioned | 2013-10-09T02:48:31Z | -
dc.date.available | 2013-10-09T02:48:31Z | -
dc.date.issued | 2013-10-09 | -
dc.identifier | AIM-1561 | -
dc.identifier | CBCL-130 | -
dc.identifier | http://hdl.handle.net/1721.1/7188 | -
dc.identifier.uri | http://koha.mediu.edu.my:8181/xmlui/handle/1721 | -
dc.description | We present a framework for learning in hidden Markov models with distributed state representations. Within this framework, we derive a learning algorithm based on the Expectation-Maximization (EM) procedure for maximum likelihood estimation. Analogous to the standard Baum-Welch update rules, the M-step of our algorithm is exact and can be solved analytically. However, due to the combinatorial nature of the hidden state representation, the exact E-step is intractable. A simple and tractable mean field approximation is derived. Empirical results on a set of problems suggest that both the mean field approximation and Gibbs sampling are viable alternatives to the computationally expensive exact algorithm. | -
dc.format | 7 p. | -
dc.format | 198365 bytes | -
dc.format | 244196 bytes | -
dc.format | application/postscript | -
dc.format | application/pdf | -
dc.language | en_US | -
dc.relation | AIM-1561 | -
dc.relation | CBCL-130 | -
dc.subject | AI | -
dc.subject | MIT | -
dc.subject | Artificial Intelligence | -
dc.subject | Hidden Markov Models | -
dc.subject | Neural networks | -
dc.subject | Time series | -
dc.subject | Mean field theory | -
dc.subject | Gibbs sampling | -
dc.subject | Factorial | -
dc.subject | Learning algorithms | -
dc.subject | Machine learning | -
dc.title | Factorial Hidden Markov Models | -
Appears in Collections: MIT Items

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
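The abstract above notes that the exact E-step of a factorial HMM is intractable and that a mean field approximation is used instead. As a rough illustration of that idea (not the paper's own implementation), the sketch below runs a fully factorized mean-field E-step on a toy factorial HMM with additive Gaussian emissions. All names, dimensions, and the parallel update schedule are assumptions made for this example; the paper's structured variants and M-step are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy factorial HMM: M chains, K states each, 1-D Gaussian emissions whose
# mean is the sum of per-chain contributions W[m, state]. (Assumed setup.)
M, K, T = 3, 2, 50
W = rng.normal(size=(M, K))           # per-chain output weights
sigma2 = 0.5                          # emission noise variance
P = np.full((M, K, K), 1.0 / K)       # per-chain transition matrices
pi = np.full((M, K), 1.0 / K)         # per-chain initial distributions

# Simulate one observation sequence from the generative model.
s = np.zeros((T, M), dtype=int)
for m in range(M):
    s[0, m] = rng.choice(K, p=pi[m])
    for t in range(1, T):
        s[t, m] = rng.choice(K, p=P[m, s[t - 1, m]])
y = W[np.arange(M), s].sum(axis=1) + rng.normal(scale=np.sqrt(sigma2), size=T)

# Fully factorized variational posterior: q(s) = prod_{t,m} theta[t, m, :].
theta = np.full((T, M, K), 1.0 / K)
for _ in range(20):                   # fixed-point iterations of the E-step
    for m in range(M):
        # Residual after subtracting the other chains' expected contributions.
        yhat_all = np.einsum('tnk,nk->t', theta, W)
        resid = y - (yhat_all - theta[:, m] @ W[m])
        # Gaussian emission term of the mean-field update.
        log_q = (resid[:, None] * W[m] - 0.5 * W[m] ** 2) / sigma2
        # Transition terms, coupling to the neighbors' current expectations.
        logP = np.log(P[m])
        log_q[0] += np.log(pi[m]) + theta[1, m] @ logP.T
        log_q[1:-1] += theta[:-2, m] @ logP + theta[2:, m] @ logP.T
        log_q[-1] += theta[-2, m] @ logP
        # Normalize each time step's distribution over the K states.
        log_q -= log_q.max(axis=1, keepdims=True)
        q = np.exp(log_q)
        theta[:, m] = q / q.sum(axis=1, keepdims=True)
```

After the loop, `theta[t, m]` approximates the marginal posterior over chain `m`'s state at time `t`; in a full EM run these expectations would feed the (exact, analytical) M-step the abstract mentions.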