Notes on Converting Models to Core ML

Author: 梁间 | Published 2018-11-07 14:24

    In theory, most deep learning models can be converted to Core ML, but in practice you run into all kinds of pitfalls, so I'm recording them here.

    TensorFlow to Core ML

    First, freeze the weights and export a frozen .pb graph
    saver=tf.train.Saver()
    # save graph definition somewhere
    tf.train.write_graph(sess.graph, '', './tf_graph.pbtxt')
    # save the weights
    saver.save(sess, './tf_model.ckpt')
    
    from tensorflow.python.tools.freeze_graph import freeze_graph

    freeze_graph(input_graph='./tf_graph.pbtxt',
                 input_saver="",
                 input_binary=False,
                 input_checkpoint='./tf_model.ckpt',
                 output_node_names='ConvPred/ConvPred',  # output node
                 restore_op_name="save/restore_all",
                 filename_tensor_name="save/Const:0",
                 output_graph='unit_norm_graph.pb',
                 clear_devices=True,
                 initializer_nodes="")
    
    Then convert it to an .mlmodel
    import tfcoreml

    coreml_model = tfcoreml.convert(
          tf_model_path='unit_norm_graph.pb',
          mlmodel_path='unit_norm_graph.mlmodel',
          input_name_shape_dict={'Placeholder:0': [1, 228, 304, 3]},  # input tensor and shape
          output_feature_names=['ConvPred/ConvPred:0'])  # output tensor
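
    To double-check the names the converter gives the model's inputs and outputs, the generated spec can be inspected with coremltools (a minimal sketch, assuming coremltools is installed; coreml_model is the object returned by tfcoreml.convert above, or the saved .mlmodel can be reloaded):
    import coremltools

    # Reload the converted model (or use the coreml_model object directly)
    mlmodel = coremltools.models.MLModel('unit_norm_graph.mlmodel')
    spec = mlmodel.get_spec()
    print(spec.description.input)   # input feature names, types and shapes
    print(spec.description.output)  # output feature names and types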
    
    Finding the names of the input and output ops

    All layers

    original_graph = tf.get_default_graph()
    ops = original_graph.get_operations()
    N = len(ops)
    print(N)
    for i in range(N):
        print('\n\nop id {} : op type: "{}"\n op name: "{}"'.format(str(i), ops[i].type, ops[i].name))
        print('input(s):')
        for x in ops[i].inputs:
            print("name = {}, shape: {}".format(x.name, x.get_shape()))
        print('output(s):')
        for x in ops[i].outputs:
            print("name = {}, shape: {}".format(x.name, x.get_shape()))
    

    The output layer (works for some models)

    pred = sess.run(net.get_output(), feed_dict={input_node: img})
    # the first argument to sess.run here is generally the output layer
    print(net.get_output())
    
    Unsupported Ops of type:

    Unpack,Pack,AddN,Tile
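
    Before attempting a conversion, the frozen .pb can also be scanned for op types that are already known to be rejected (a rough helper of my own, not part of tfcoreml; the set below just contains the ops listed above):
    import tensorflow as tf

    UNSUPPORTED = {'Unpack', 'Pack', 'AddN', 'Tile'}

    # load the frozen graph and report any node whose op type is unsupported
    graph_def = tf.GraphDef()
    with open('unit_norm_graph.pb', 'rb') as f:
        graph_def.ParseFromString(f.read())

    for node in graph_def.node:
        if node.op in UNSUPPORTED:
            print('unsupported op: type={}, name={}'.format(node.op, node.name))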

    Summary of models that failed to convert, and why

    FCRN-DepthPrediction:
    Unsupported Ops of type: AddN,Pack

    SuperPoint:
    Unsupported Ops of type: Unpack,Pack

    light-weight-refinenet:
    Unsupported Upsampling bilinear mode

    pytorch-fcn:
    contains custom ops

    maskrcnn:
    contains custom ops

    3DDFA:
    does not support Python 3.6

    PyTorch to Core ML

    First, export the PyTorch model to ONNX
    torch.onnx.export(module, input, "xxx.onnx", verbose=True)
    
    Then convert the ONNX model to Core ML
    # convert here is onnx_coreml.convert; '0' and '186' are the input/output
    # names taken from this particular ONNX graph
    coreml_model = convert(model, image_input_names=['0'], image_output_names=['186'])
    coreml_model.save(model_out)
    
    A complete example: PyTorch DenseNet to ONNX to Core ML
    from torch.autograd import Variable
    import torch.onnx
    import torchvision
    import onnx
    import onnx_coreml
    
    dummy_input = Variable(torch.randn(1, 3, 224, 224))
    # Obtain your model, it can be also constructed in your script explicitly
    model = torchvision.models.densenet169(pretrained=True)
    # Invoke export
    torch.onnx.export(model, dummy_input, "densenet.onnx")
    # Load the ONNX model
    model = onnx.load("densenet.onnx")
    # Check that the IR is well formed
    onnx.checker.check_model(model)
    
    cml = onnx_coreml.convert(model, image_input_names=['0'])
    cml.save('output/densenet.mlmodel')
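
    As an optional sanity check of the intermediate ONNX model (my own addition, assuming onnxruntime is installed; not required for the conversion), the same dummy input can be run through PyTorch and ONNX Runtime and the outputs compared:
    import numpy as np
    import onnxruntime

    # `model` above now holds the ONNX proto, so re-create the PyTorch model
    torch_model = torchvision.models.densenet169(pretrained=True).eval()
    dummy = torch.randn(1, 3, 224, 224)
    with torch.no_grad():
        torch_out = torch_model(dummy).numpy()

    sess = onnxruntime.InferenceSession('densenet.onnx')
    input_name = sess.get_inputs()[0].name
    onnx_out = sess.run(None, {input_name: dummy.numpy()})[0]

    print('max abs diff:', np.abs(torch_out - onnx_out).max())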
    
