Do Deep Learning Models Have Too Many Parameters?

Author: 朱小虎XiaohuZhu | Published 2018-09-11 00:35

    Léonard Blier (ENS), Yann Ollivier (FAIR)

    Abstract

    Deep learning models often have more parameters than observations, and still perform well. This is sometimes described as a paradox. In this work, we show experimentally that despite their huge number of parameters, deep neural networks can compress the data losslessly even when taking the cost of encoding the parameters into account. Such a compression viewpoint originally motivated the use of variational methods in neural networks [HV93, Sch97].
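
    As a rough sketch of this compression viewpoint (standard MDL background, not notation taken from the paper itself): the total cost of transmitting the labels of a dataset with a parametric model splits into the cost of encoding the parameters plus the cost of encoding the labels given those parameters.

    % Two-part description length of the labels D = {(x_i, y_i)}_{i=1..n}
    % under a model p_\theta: L(\theta) is the number of bits spent on the
    % parameters, and the sum is the number of bits spent on the labels.
    % Illustrative notation; all logarithms are base 2, so lengths are in bits.
    L_{\mathrm{2\text{-}part}}(D) \;=\; L(\theta) \;+\; \sum_{i=1}^{n} -\log_2 p_\theta\!\left(y_i \mid x_i\right)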

    However, we show that these variational methods provide surprisingly poor compression bounds, despite being explicitly built to minimize such bounds. This might explain the relatively poor practical performance of variational methods in deep learning. Better encoding methods, imported from the Minimum Description Length (MDL) toolbox, yield much better compression values on deep networks, corroborating the hypothesis that good compression on the training set correlates with good test performance.
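
    For reference, the bound that variational methods in the spirit of [HV93] are trained to minimize, and the prequential (online) code that, in the full paper, provides the much tighter MDL-style bounds, can be sketched as follows (again in illustrative notation, with base-2 logarithms throughout):

    % Variational codelength: the KL divergence between an approximate
    % posterior q over the weights and a prior p, plus the expected
    % codelength of the labels under weights drawn from q.
    L_{\mathrm{var}}(D) \;=\; \mathrm{KL}\!\left(q(\theta) \,\|\, p(\theta)\right) \;+\; \mathbb{E}_{\theta \sim q}\!\left[\sum_{i=1}^{n} -\log_2 p_\theta\!\left(y_i \mid x_i\right)\right]

    % Prequential codelength: each label is encoded with the model trained
    % on the examples seen so far (\hat{\theta}_{i-1} is fit on the first
    % i-1 pairs), so no separate parameter cost needs to be transmitted.
    L_{\mathrm{preq}}(D) \;=\; \sum_{i=1}^{n} -\log_2 p_{\hat{\theta}_{i-1}}\!\left(y_i \mid x_i\right)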

Article link: https://www.haomeiwen.com/subject/bhcygftx.html