Please use this identifier to cite or link to this item: http://dspace.mediu.edu.my:8181/xmlui/handle/1721.1/7202
Full metadata record
DC Field: Value
dc.creator: Ghahramani, Zoubin
dc.creator: Jordan, Michael I.
dc.date: 2004-10-20T20:49:37Z
dc.date: 2004-10-20T20:49:37Z
dc.date: 1995-01-24
dc.date.accessioned: 2013-10-09T02:48:32Z
dc.date.available: 2013-10-09T02:48:32Z
dc.date.issued: 2013-10-09
dc.identifier: AIM-1509
dc.identifier: CBCL-108
dc.identifier: http://hdl.handle.net/1721.1/7202
dc.identifier.uri: http://koha.mediu.edu.my:8181/xmlui/handle/1721
dc.description: Real-world learning tasks often involve high-dimensional data sets with complex patterns of missing features. In this paper we review the problem of learning from incomplete data from two statistical perspectives: the likelihood-based and the Bayesian. The goal is twofold: to place current neural network approaches to missing data within a statistical framework, and to describe a set of algorithms, derived from the likelihood-based framework, that handle clustering, classification, and function approximation from incomplete data in a principled and efficient manner. These algorithms are based on mixture modeling and make two distinct appeals to the Expectation-Maximization (EM) principle (Dempster, Laird, and Rubin 1977), both for the estimation of mixture components and for coping with the missing data.
dc.format: 11 p.
dc.format: 388268 bytes
dc.format: 515095 bytes
dc.format: application/postscript
dc.format: application/pdf
dc.language: en_US
dc.relation: AIM-1509
dc.relation: CBCL-108
dc.subject: AI
dc.subject: MIT
dc.subject: Artificial Intelligence
dc.subject: missing data
dc.subject: mixture models
dc.subject: statistical learning
dc.subject: EM algorithm
dc.subject: maximum likelihood
dc.subject: neural networks
dc.title: Learning from Incomplete Data
Appears in Collections: MIT Items

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
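The abstract above describes mixture-model algorithms that appeal to EM twice: once to fit the mixture components and once to cope with missing features. As a rough illustration of that idea only, here is a minimal sketch with diagonal covariances; it is not the memo's actual algorithm (which treats full covariances and regression-based conditional expectations), and the function name, initialisation scheme, and all parameters are invented for this example.

```python
import numpy as np

def em_mixture_missing(X, K, n_iter=100, seed=0):
    """EM for a diagonal-covariance Gaussian mixture on data with NaN entries.

    Echoing the abstract's two appeals to EM: responsibilities are computed
    from the observed dimensions only, and each missing value is replaced by
    its conditional expectation under each component (with diagonal
    covariances, simply that component's mean).
    """
    rng = np.random.default_rng(seed)
    N, D = X.shape
    obs = ~np.isnan(X)                      # True where a feature is observed
    Xm = np.where(obs, X, 0.0)              # filler; only used where masked out
    X0 = np.where(obs, X, np.nanmean(X, axis=0))
    # farthest-point initialisation of the component means
    idx = [int(rng.integers(N))]
    for _ in range(K - 1):
        d2 = ((X0[:, None, :] - X0[idx][None, :, :]) ** 2).sum(-1).min(1)
        idx.append(int(np.argmax(d2)))
    mu = X0[idx].copy()
    var = np.ones((K, D))
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: responsibilities from the observed dimensions only
        logr = np.tile(np.log(pi), (N, 1))
        for k in range(K):
            ll = -0.5 * (np.log(2 * np.pi * var[k]) + (Xm - mu[k]) ** 2 / var[k])
            logr[:, k] += np.where(obs, ll, 0.0).sum(axis=1)
        logr -= logr.max(axis=1, keepdims=True)
        r = np.exp(logr)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: impute each missing entry with the component's mean
        Nk = r.sum(axis=0) + 1e-12
        pi = Nk / N
        for k in range(K):
            Xk = np.where(obs, X, mu[k])    # E[x | component k] for missing dims
            mu_new = (r[:, k, None] * Xk).sum(0) / Nk[k]
            # missing dimensions also contribute var[k] to the second moment
            sq = (Xk - mu_new) ** 2 + np.where(obs, 0.0, var[k])
            var[k] = (r[:, k, None] * sq).sum(0) / Nk[k] + 1e-6
            mu[k] = mu_new
    return pi, mu, var
```

On well-separated synthetic clusters with a fraction of entries deleted at random, a sketch like this recovers the component means despite the missing features; the paper's full treatment extends the same two EM steps to general covariances.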