[Training RNNs without backtracking] Training recurrent net

Author: hzyido | Published 2016-01-12 09:22

    Maintaining the full derivative G(t) of the current state with respect to the parameters has a cost proportional to the state dimension times the number of parameters. This prevents computing or even storing G(t) for moderately large-dimensional dynamical systems, such as recurrent neural networks.

1 The NoBackTrack algorithm

1.1 The rank-one trick: an expectation-preserving reduction

    We propose to build an approximation of G(t). The construction of an unbiased approximation is based on the following "rank-one trick".
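The trick can be stated as follows (a sketch restating the proposition from the NoBackTrack paper; the εᵢ are i.i.d. uniform random signs):

```latex
% Rank-one trick: decompose A into rank-one terms, then collapse the sum
% into a single random rank-one matrix with the same expectation.
A = \sum_{i} v_i w_i^{\top}, \qquad
\tilde{A} = \Big(\sum_i \varepsilon_i v_i\Big)\Big(\sum_i \varepsilon_i w_i\Big)^{\top},
\qquad \varepsilon_i \ \text{i.i.d. uniform in } \{-1, +1\}.
% Since E[\varepsilon_i \varepsilon_j] = \delta_{ij}, the cross terms
% vanish in expectation, so E[\tilde{A}] = A.
```

The point is that Ã is always rank one, so it can be stored as a pair of vectors rather than a full matrix, while remaining an unbiased estimate of A.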

    The rank-one reduction Ã depends not only on the value of A, but also on the way A is decomposed as a sum of rank-one terms. In the applications to recurrent networks below, there is a natural such choice.
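A minimal NumPy sketch of the reduction and a Monte Carlo check of its unbiasedness (the function name and the random rank-3 test decomposition are mine, not from the paper):

```python
import numpy as np

def rank_one_reduction(vs, ws, rng):
    """Rank-one trick: given A = sum_i outer(vs[i], ws[i]), draw i.i.d.
    uniform signs eps_i in {-1, +1} and collapse the sum into the single
    rank-one matrix (sum_i eps_i vs[i]) (sum_i eps_i ws[i])^T.
    Since E[eps_i * eps_j] = 1 if i == j and 0 otherwise, the cross terms
    vanish in expectation, so E[result] = A."""
    eps = rng.choice([-1.0, 1.0], size=len(vs))
    v = eps @ np.stack(vs)  # sum_i eps_i v_i
    w = eps @ np.stack(ws)  # sum_i eps_i w_i
    return np.outer(v, w)

# Monte Carlo check of unbiasedness on a random rank-3 decomposition.
rng = np.random.default_rng(0)
vs = [rng.standard_normal(4) for _ in range(3)]
ws = [rng.standard_normal(5) for _ in range(3)]
A = sum(np.outer(v, w) for v, w in zip(vs, ws))
est = np.mean([rank_one_reduction(vs, ws, rng) for _ in range(50_000)],
              axis=0)
print(float(np.max(np.abs(est - A))))  # close to 0: the estimator is unbiased
```

Note that while the expectation is always A, the variance of Ã depends on which rank-one decomposition of A is used, which is why the choice of decomposition mentioned above matters.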


Source: https://www.haomeiwen.com/subject/crxckttx.html