Please use this identifier to cite or link to this item: http://dspace.mediu.edu.my:8181/xmlui/handle/1721.1/7045
Title: Using Recurrent Networks for Dimensionality Reduction
Issue Date: 9-Oct-2013
Description: This report explores how recurrent neural networks can be used to learn high-dimensional mappings. Since recurrent networks are computationally as powerful as Turing machines, an interesting question is how they can be used to simplify the problem of learning from examples. The main obstacle to learning high-dimensional functions is the curse of dimensionality, which roughly states that the number of examples needed to learn a function grows exponentially with the input dimension. The report proposes a way around this problem: use a recurrent network to decompose a high-dimensional function into many lower-dimensional functions connected in a feedback loop.
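The decomposition idea can be made concrete with a small sketch. The Python example below is an illustration only, not the report's actual construction: the wiring, layer sizes, and the step/run helpers are hypothetical. Each output coordinate of an n-dimensional map is computed by its own low-dimensional function g_i of only k coordinates of the fed-back state, and the loop is iterated toward a fixed point. The contrast with the curse of dimensionality is that each g_i has to be learned over k inputs rather than all n (sampling [0,1]^d on a grid of spacing 0.1 needs 10^d points, so shrinking d from n to k matters).

    # Minimal sketch, assuming a fixed-point semantics for the feedback
    # loop; all names and sizes here are hypothetical stand-ins.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 8   # dimension of the input/state
    k = 3   # each component function g_i sees only k coordinates

    # Each coordinate i gets its own small random 2-layer net g_i.
    wires = [rng.choice(n, size=k, replace=False) for _ in range(n)]
    W1 = rng.normal(0.0, 0.5, size=(n, 4, k))  # hidden weights of each g_i
    W2 = rng.normal(0.0, 0.5, size=(n, 4))     # output weights of each g_i

    def step(state):
        """One pass of the feedback loop: recompute every coordinate
        from the k state coordinates its g_i is wired to."""
        new = np.empty(n)
        for i in range(n):
            h = np.tanh(W1[i] @ state[wires[i]])  # k-dim -> 4-dim hidden
            new[i] = np.tanh(W2[i] @ h)           # 4-dim -> scalar output
        return new

    def run(x, iters=50):
        """Iterate the recurrent loop from input x; if the iteration
        settles, the fixed point is the network's n-dim output."""
        state = x.copy()
        for _ in range(iters):
            state = step(state)
        return state

    print(run(rng.normal(size=n)))

In this toy version convergence is not guaranteed; the bounded tanh units merely keep the iteration stable enough to demonstrate the structure.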
URI: http://dspace.mediu.edu.my:8181/xmlui/handle/1721.1/7045
Other Identifiers: AITR-1396
http://hdl.handle.net/1721.1/7045
Appears in Collections: MIT Items

Files in This Item:
There are no files associated with this item.

