ThiNet

Author: 信步闲庭v | Published 2017-10-23 22:43

    Approach

    • Filter selection
      The key idea is: if we can use a subset of the channels in layer (i + 1)'s input to approximate the output of layer (i + 1), then the remaining channels can be safely removed from layer (i + 1)'s input.

    Channel selection is cast as an optimization problem: find the subset S of input channels that best reconstructs the output,

        Ŝ = arg min_S  Σ_{i=1}^{m} ( ŷ_i − Σ_{j∈S} x̂_{i,j} )²
        s.t.  |S| = C_{i+1} × r,   S ⊂ {1, 2, …, C_{i+1}},

    where |S| is the number of elements in the subset S, r is a pre-defined compression rate (the fraction of channels to keep), C_{i+1} is the number of input channels of layer (i + 1), and x̂_{i,j} is the contribution of channel j to the output value ŷ_i. Given a set of m training examples (x̂_i, ŷ_i), where m is the number of sampled images times the number of sampled locations per image, the subset is found greedily: starting from the full set, repeatedly remove the channel whose removal increases the squared reconstruction error the least, until only C_{i+1} × r channels remain (a NumPy sketch of this procedure appears after this list).


    • Pruning
      The weak channels in layer (i + 1)'s input and their corresponding filters in layer i are pruned away, leading to a much smaller model.
    • Fine-tuning
      Fine-tuning is a necessary step to recover the generalization ability damaged by filter pruning. The method then returns to the filter-selection step to prune the next layer.
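
    Below is a minimal NumPy sketch of the selection and pruning steps, not the authors' released code: the function names (greedy_channel_selection, prune_conv_pair) and the layout of the matrix X are illustrative assumptions. X stacks, for each of the m sampled (image, location) pairs, the per-channel contributions x̂_{i,j} to one output value, so that ŷ_i = Σ_j X[i, j].

        import numpy as np

        def greedy_channel_selection(X, r):
            """Greedily choose which input channels of layer (i+1) to keep.

            X : (m, C) array; X[i, j] is x_hat_{i,j}, channel j's
                contribution to the i-th sampled output value.
            r : compression rate, the fraction of channels to keep.
            Returns (kept, removed) lists of channel indices.
            """
            m, C = X.shape
            n_keep = int(C * r)
            removed = []              # the pruned set T
            acc = np.zeros(m)         # running sum of removed channels' values
            candidates = set(range(C))
            while len(candidates) > n_keep:
                # Remove the channel whose absence hurts reconstruction
                # least, i.e. minimizes || sum_{j in T ∪ {c}} x_hat_j ||².
                best = min(candidates,
                           key=lambda c: np.sum((acc + X[:, c]) ** 2))
                acc += X[:, best]
                candidates.remove(best)
                removed.append(best)
            return sorted(candidates), removed

        def prune_conv_pair(W_i, b_i, W_next, kept):
            """Physically prune: drop the weak filters of layer i and the
            matching input channels of layer (i+1).

            W_i    : (C, C_in, k, k) filters of layer i
            b_i    : (C,) biases of layer i
            W_next : (C_out, C, k, k) filters of layer (i+1)
            """
            return W_i[kept], b_i[kept], W_next[:, kept]

        # Toy usage: keep half of 8 channels using 100 sampled values.
        rng = np.random.default_rng(0)
        X = rng.standard_normal((100, 8))
        kept, removed = greedy_channel_selection(X, r=0.5)

    Caching the running sum acc makes each greedy step cost O(m · C) instead of re-summing all removed channels, so selecting channels for one layer is roughly O(m · C²).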

    Experiment

    ImageNet

    References:
    Jian-Hao Luo, Jianxin Wu, Weiyao Lin. "ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression." ICCV 2017.
