
Deep Learning: A Summary of the Modeling Workflow

Author: 大鴈 | Published 2018-06-03 22:40
    1. Environment setup:

      Hardware: GPU, CPU

      Software: Ubuntu, the GPU build of TensorFlow, Seaborn, Matplotlib

      Requirement: the TensorFlow version must match the GPU driver / CUDA toolkit version

      tips:

      # Check GPU usage from the command line, refreshing every 10 seconds
      watch -n 10 nvidia-smi
      # One-off snapshot
      nvidia-smi
      

      Tips on applying for and using AWS academic/research servers.
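
      A quick way to confirm that the installed TensorFlow GPU build and the CUDA/driver versions actually match is to ask TensorFlow whether it sees the GPU; a minimal sketch, assuming TensorFlow 2.x:

      import tensorflow as tf

      print(tf.__version__)                            # installed TensorFlow version
      # An empty list here means the GPU is not visible to TensorFlow,
      # usually a sign of a version mismatch in the CUDA/driver stack.
      print(tf.config.list_physical_devices("GPU"))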

    2. Modeling workflow:

      1. Choose a neural-network framework: Keras, TensorFlow, PyTorch...
      2. Preprocess the available data to fit the data types the chosen framework expects. Make good use of pandas and sklearn to read the dataset (read_csv), split it (train_test_split), and handle missing values, and use seaborn and matplotlib to visualize the data and support intuitive judgment (see the sketch after this list).
      3. Build the model.
      4. Evaluate model performance.
      5. Perform targeted error analysis and correction.
      6. Use the model to predict and generate results.
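
      A minimal sketch of step 2, assuming a hypothetical train.csv with a "label" column; the file name and column are placeholders, not from the original notes:

      import pandas as pd
      from sklearn.model_selection import train_test_split

      df = pd.read_csv("train.csv")                    # read the raw dataset
      df = df.fillna(df.median(numeric_only=True))     # simple missing-value handling

      X = df.drop(columns=["label"]).values
      y = df["label"].values

      # Hold out 20% of the samples for validation.
      X_train, X_val, y_train, y_val = train_test_split(
          X, y, test_size=0.2, random_state=42)
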
    3. Development tips:

      1. Make good use of libraries: sklearn, seaborn, matplotlib, pandas (a quick visualization sketch follows).
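
      For example, a quick label-distribution plot with seaborn and matplotlib (the labels below are toy data, just to illustrate the idea):

      import numpy as np
      import matplotlib.pyplot as plt
      import seaborn as sns

      labels = np.random.randint(0, 10, size=500)      # toy label array
      sns.countplot(x=labels)                          # class balance at a glance
      plt.title("Label distribution")
      plt.show()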

    Useful Q&A:

    1. Q: Just one small question: why do you take batch_size to be 86? Is it just a random value, or does it change the result?

      A: It would be really interesting to hear from the author about this. But I believe you will get pretty much the same results if you choose 64, 32, or 128 as the batch size. And maybe it will even run faster because of CPU optimizations...

      A: Batch size is mainly a constraint of your own computer. The larger the batch size, the more data you're loading into memory for the model to train on at once. The smaller the batch size, the less data you load at once, but your computation will be slower.

      It's a tradeoff between speed and memory.
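
      For context, in Keras the batch size is just an argument to fit(); a minimal sketch on toy data (the model and data below are illustrative, not the kernel's):

      import numpy as np
      from tensorflow import keras

      X = np.random.rand(1000, 20).astype("float32")   # toy features
      y = np.random.randint(0, 2, size=1000)           # toy binary labels

      model = keras.Sequential([
          keras.Input(shape=(20,)),
          keras.layers.Dense(16, activation="relu"),
          keras.layers.Dense(1, activation="sigmoid"),
      ])
      model.compile(optimizer="adam", loss="binary_crossentropy",
                    metrics=["accuracy"])

      # Larger batches use more memory per step; smaller batches use less
      # memory but need more steps per epoch. 32, 64, 86, or 128 all work.
      model.fit(X, y, batch_size=86, epochs=2)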

    2. Q: In my first try I used:

      In -> [ Conv2D (3,3) -> relu -> MaxPool2D ]*2 -> Conv2D (3,3) -> relu -> MaxPool2D -> Flatten -> Dense -> Dropout -> Out

      (I got good accuracy in the cats & dogs competition with this architecture) and the accuracy was 0.95.

      How can we know a good CNN architecture for any type of problem?

      A: There are many convolutional neural network models proposed in many papers, and every model gives better accuracy than the one before it: AlexNet performs better than LeNet, and GoogLeNet is better than AlexNet. But in general, with some error analysis and trials, you should find the number of layers and the architecture that fit the task.

      Q: So, when facing a new image problem, how should a beginner start their neural network? Any suggestions for a starting architecture? Thanks.

      A: You may try to find a paper or an algorithm that has been proven to work well for similar tasks, and then modify it to fit your task according to the results you get. You may also consider not reinventing the wheel: instead of implementing an algorithm from scratch, use one of the well-known architectures from ImageNet or other challenges, such as VGG-16, VGG-19, or YOLO, depending on the task. Transfer learning makes training easier because the network is pre-trained, but you will have to decide how many layers to freeze according to the amount of training data you have.
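
      As a concrete illustration of the transfer-learning advice above, a minimal Keras sketch that freezes a pre-trained VGG-16 backbone; the input size and the 10-class head are assumptions made for the example:

      from tensorflow import keras

      # Pre-trained VGG-16 backbone with the classification head removed.
      base = keras.applications.VGG16(weights="imagenet", include_top=False,
                                      input_shape=(224, 224, 3))
      base.trainable = False  # freeze the pre-trained convolutional layers

      model = keras.Sequential([
          base,
          keras.layers.Flatten(),
          keras.layers.Dense(256, activation="relu"),
          keras.layers.Dropout(0.5),
          keras.layers.Dense(10, activation="softmax"),  # assumed 10 classes
      ])
      model.compile(optimizer="adam", loss="categorical_crossentropy",
                    metrics=["accuracy"])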

    3. Q:

      1. Accuracy seems to be lower than validation accuracy. Is this because the training data is augmented and thus harder to identify than the validation data?
      2. You chose In -> [[Conv2D->relu]*2 -> MaxPool2D -> Dropout]*2 -> Flatten -> Dense -> Dropout -> Out as your CNN structure. Could you provide some reasoning for placing two Conv2D layers before max pooling? Why is this structure better than In -> [ Conv2D -> relu -> MaxPool2D -> Dropout]*2 -> Flatten -> Dense -> Dropout -> Out?

      A:

      1. Yes, exactly!

      2. This dataset is composed of digit images of the same small size, and the images are already somewhat normalized, so we are facing an easy problem; there is no need for a very deep network.
      It is better to add consecutive Conv+ReLU layers followed by a MaxPool layer: with this technique you increase the number of filters exponentially. Take a look at GoogLeNet or the VGG-16/19 networks; they are very deep networks, but very well built to extract features from images.
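
      A minimal Keras sketch of the structure discussed above, for 28x28 grayscale digit images; the filter counts, kernel sizes, and dropout rates are illustrative assumptions, not the kernel's exact values:

      from tensorflow import keras

      model = keras.Sequential([
          keras.Input(shape=(28, 28, 1)),
          # [[Conv2D -> relu]*2 -> MaxPool2D -> Dropout] * 2
          keras.layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
          keras.layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
          keras.layers.MaxPooling2D((2, 2)),
          keras.layers.Dropout(0.25),
          keras.layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
          keras.layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
          keras.layers.MaxPooling2D((2, 2)),
          keras.layers.Dropout(0.25),
          # Flatten -> Dense -> Dropout -> Out
          keras.layers.Flatten(),
          keras.layers.Dense(256, activation="relu"),
          keras.layers.Dropout(0.5),
          keras.layers.Dense(10, activation="softmax"),
      ])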
