Please use this identifier to cite or link to this item:
http://dspace.mediu.edu.my:8181/xmlui/handle/1721.1/7188

| Title: | Factorial Hidden Markov Models |
| Keywords: | AI MIT Artificial Intelligence Hidden Markov Models Neural networks Time series Mean field theory Gibbs sampling Factorial Learning algorithms Machine learning |
| Issue Date: | 9-Oct-2013 |
| Description: | We present a framework for learning in hidden Markov models with distributed state representations. Within this framework, we derive a learning algorithm based on the Expectation--Maximization (EM) procedure for maximum likelihood estimation. Analogous to the standard Baum-Welch update rules, the M-step of our algorithm is exact and can be solved analytically. However, due to the combinatorial nature of the hidden state representation, the exact E-step is intractable. A simple and tractable mean field approximation is derived. Empirical results on a set of problems suggest that both the mean field approximation and Gibbs sampling are viable alternatives to the computationally expensive exact algorithm. |
| URI: | http://koha.mediu.edu.my:8181/xmlui/handle/1721 |
| Other Identifiers: | AIM-1561 CBCL-130 http://hdl.handle.net/1721.1/7188 |
| Appears in Collections: | MIT Items |
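The description above notes that the exact E-step is intractable because of the combinatorial hidden state representation, while the mean field approximation remains tractable. A minimal sketch of why, assuming a factorial HMM with M independent chains of K states each (the function names and the fully factorized cost model are illustrative, not from the paper):

```python
def joint_states(num_chains: int, states_per_chain: int) -> int:
    """Size of the combined hidden state space: K**M states."""
    return states_per_chain ** num_chains

def exact_e_step_cost(num_chains: int, states_per_chain: int) -> int:
    """Forward-backward over the flattened joint chain touches every
    pair of joint states, so it scales like (K**M)**2 per time step."""
    return joint_states(num_chains, states_per_chain) ** 2

def mean_field_cost(num_chains: int, states_per_chain: int) -> int:
    """A fully factorized mean-field E-step updates each chain on its
    own, costing on the order of M * K**2 per time step per sweep."""
    return num_chains * states_per_chain ** 2

# Even a modest model shows the gap:
M, K = 10, 2
print(joint_states(M, K))       # 1024 joint states
print(exact_e_step_cost(M, K))  # 1048576 operations per time step
print(mean_field_cost(M, K))    # 40 operations per time step per sweep
```

The exponential gap between the last two quantities is the motivation for the mean field and Gibbs sampling alternatives the abstract describes.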
