Deploying Deep Learning on iOS

Author: 陆号 | Published 2018-05-02 17:37

Apple machine learning blog

1. Core ML overview, real-time object detection, and converting Caffe/TensorFlow models with coremltools
2. Learning Core ML: converting a Caffe model and using it in an iOS app

- (NSString *)predictImageScene:(UIImage *)image {
    // GoogLeNetPlaces is the class Xcode generates from the GoogLeNetPlaces.mlmodel file
    GoogLeNetPlaces *model = [[GoogLeNetPlaces alloc] init];
    NSError *error = nil;
    // The model expects a 224x224 input; scaleToSize: and pixelBufferFromCGImage:
    // are UIImage helper categories (a sketch of them follows this snippet)
    UIImage *scaledImage = [image scaleToSize:CGSizeMake(224, 224)];
    CVPixelBufferRef buffer = [scaledImage pixelBufferFromCGImage:scaledImage.CGImage];
    GoogLeNetPlacesInput *input = [[GoogLeNetPlacesInput alloc] initWithSceneImage:buffer];
    GoogLeNetPlacesOutput *output = [model predictionFromFeatures:input error:&error];
    if (error) {
        NSLog(@"%@", error.localizedDescription);
        return nil;
    }
    return output.sceneLabel;
}
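
The two UIImage helpers used above (scaleToSize: and pixelBufferFromCGImage:) are not part of UIKit and are not shown in the linked articles, so the following is only a minimal sketch of what they typically look like; the category name and the implementation details are assumptions:

#import <UIKit/UIKit.h>
#import <CoreVideo/CoreVideo.h>

@interface UIImage (CoreMLHelpers)
- (UIImage *)scaleToSize:(CGSize)size;
- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)cgImage;
@end

@implementation UIImage (CoreMLHelpers)

- (UIImage *)scaleToSize:(CGSize)size {
    // Redraw the image into a bitmap context of the requested size
    UIGraphicsBeginImageContextWithOptions(size, NO, 1.0);
    [self drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaled;
}

- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)cgImage {
    // Create a CVPixelBuffer and draw the CGImage into its backing memory
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    NSDictionary *options = @{ (id)kCVPixelBufferCGImageCompatibilityKey : @YES,
                               (id)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES };
    CVPixelBufferRef pixelBuffer = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32ARGB,
                        (__bridge CFDictionaryRef)options, &pixelBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                                 width, height, 8,
                                                 CVPixelBufferGetBytesPerRow(pixelBuffer),
                                                 colorSpace, kCGImageAlphaNoneSkipFirst);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    return pixelBuffer;
}

@end
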
- (void)prediction {
    // Resnet50 is the class Xcode generates from Resnet50.mlmodel
    Resnet50 *resnetModel = [[Resnet50 alloc] init];
    UIImage *image = showImg.image;   // showImg is the UIImageView holding the photo
    // Load the ML model from the generated class. VNCoreMLModel is just a container
    // that wraps a Core ML model for use with Vision requests; any image-analysis
    // Core ML model can be wrapped this way.
    // The standard Vision workflow is: create the model, create one or more requests,
    // then create and run a request handler. First, create the model:
    VNCoreMLModel *vnCoreModel = [VNCoreMLModel modelForMLModel:resnetModel.model error:nil];
    // VNCoreMLRequest is an image-analysis request that uses the Core ML model to do
    // its work. Its completion handler receives the request and an error object.
    VNCoreMLRequest *vnCoreMlRequest = [[VNCoreMLRequest alloc] initWithModel:vnCoreModel completionHandler:^(VNRequest * _Nonnull request, NSError * _Nullable error) {
        // Pick the classification with the highest confidence
        CGFloat confidence = 0.0f;
        VNClassificationObservation *tempClassification = nil;
        for (VNClassificationObservation *classification in request.results) {
            if (classification.confidence > confidence) {
                confidence = classification.confidence;
                tempClassification = classification;
            }
        }
        // If this handler may run off the main thread, dispatch these UI updates to the main queue
        recognitionResultLabel.text = [NSString stringWithFormat:@"Result: %@", tempClassification.identifier];

        confidenceResult.text = [NSString stringWithFormat:@"Confidence: %@", @(tempClassification.confidence)];
    }];
    // Create and run the request handler
    VNImageRequestHandler *vnImageRequestHandler = [[VNImageRequestHandler alloc] initWithCGImage:image.CGImage options:@{}];

    NSError *error = nil;
    [vnImageRequestHandler performRequests:@[vnCoreMlRequest] error:&error];

    if (error) {
        NSLog(@"%@", error.localizedDescription);
    }
}

3.Custom Layers in Core ML
In this post I’ll show how to convert a Keras model with a custom layer to Core ML.
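
For context, here is a minimal sketch of Core ML's custom-layer protocol (MLCustomLayer), using a swish activation, swish(x) = x * sigmoid(x), as the example layer. The class name, the choice of layer, and the assumption of float32 multi-arrays are illustrative and not taken from the post:

#import <CoreML/CoreML.h>
#include <math.h>

// A custom layer adopts MLCustomLayer; Core ML instantiates it by the class name
// stored in the converted model.
@interface Swish : NSObject <MLCustomLayer>
@end

@implementation Swish

- (instancetype)initWithParameterDictionary:(NSDictionary<NSString *, id> *)parameters
                                      error:(NSError **)error {
    self = [super init];
    return self;
}

- (BOOL)setWeightData:(NSArray<NSData *> *)weights error:(NSError **)error {
    return YES;  // swish has no learned weights
}

- (NSArray<NSArray<NSNumber *> *> *)outputShapesForInputShapes:(NSArray<NSArray<NSNumber *> *> *)inputShapes
                                                          error:(NSError **)error {
    return inputShapes;  // an element-wise activation keeps the input shape
}

- (BOOL)evaluateOnCPUWithInputs:(NSArray<MLMultiArray *> *)inputs
                        outputs:(NSArray<MLMultiArray *> *)outputs
                          error:(NSError **)error {
    // Assumes float32 multi-arrays; applies swish element-wise
    for (NSUInteger i = 0; i < inputs.count; i++) {
        const float *src = (const float *)inputs[i].dataPointer;
        float *dst = (float *)outputs[i].dataPointer;
        for (NSInteger j = 0; j < inputs[i].count; j++) {
            dst[j] = src[j] / (1.0f + expf(-src[j]));  // x * sigmoid(x)
        }
    }
    return YES;
}

@end
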

4.Real-time object detection with YOLO
In this blog post I’ll describe what it took to get the “tiny” version of YOLOv2 running on iOS using Metal Performance Shaders.
Of course I used Forge to build the iOS app. 😂 You can find the code in the YOLO folder. To try it out: download or clone Forge, open Forge.xcworkspace in Xcode 8.3 or later, and run the YOLO target on an iPhone 6 or newer.
On my iPhone 6s it takes about 0.15 seconds to process a single image. That is only about 6 FPS, barely fast enough to call it real-time.

Deep learning in practice on iOS: real-time object detection with YOLO
The recently released Caffe2 framework also runs on iOS via Metal. The Caffe2-iOS project includes a version of tiny YOLO; it appears to run somewhat slower than the pure Metal version, at about 0.17 seconds per frame.
YAD2K: Yet Another Darknet 2 Keras

5. How to use the MPS framework in iOS 10 for fast, GPU-accelerated CNN computation
6. A step-by-step guide to object recognition on iPhone with Apple's Core ML
7.A peek inside Core ML
Running realtime Inception-v3 on Core ML

8.Forge: a neural network toolkit for Metal

Forge is a collection of helper code that makes it a little easier to construct deep neural networks using Apple's MPSCNN framework; a sketch of the raw MPSCNN calls it wraps follows below.
Forge: neural network toolkit for Metal
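
As background for items 5 and 8, here is a hedged sketch of what a single raw MPSCNNConvolution layer looks like with the iOS 10 API (the layer sizes and the weight/bias buffers are placeholders, not taken from the articles above):

#import <MetalPerformanceShaders/MetalPerformanceShaders.h>

MPSImage *runConvolution(id<MTLDevice> device,
                         id<MTLCommandBuffer> commandBuffer,
                         MPSImage *input,
                         const float *weights,   // 3x3, 3 -> 16 channels, row-major
                         const float *biases) {  // 16 bias terms
    // Describe the convolution: kernel size and feature channels
    MPSCNNConvolutionDescriptor *desc =
        [MPSCNNConvolutionDescriptor cnnConvolutionDescriptorWithKernelWidth:3
                                                                kernelHeight:3
                                                        inputFeatureChannels:3
                                                       outputFeatureChannels:16
                                                                neuronFilter:nil];
    // Create the layer with its weights and biases (iOS 10-style initializer)
    MPSCNNConvolution *conv = [[MPSCNNConvolution alloc] initWithDevice:device
                                                  convolutionDescriptor:desc
                                                          kernelWeights:weights
                                                              biasTerms:biases
                                                                  flags:MPSCNNConvolutionFlagsNone];
    // Allocate an output image and encode the layer onto the command buffer
    MPSImageDescriptor *outDesc =
        [MPSImageDescriptor imageDescriptorWithChannelFormat:MPSImageFeatureChannelFormatFloat16
                                                       width:input.width
                                                      height:input.height
                                             featureChannels:16];
    MPSImage *output = [[MPSImage alloc] initWithDevice:device imageDescriptor:outDesc];
    [conv encodeToCommandBuffer:commandBuffer sourceImage:input destinationImage:output];
    return output;
}
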

MPS workflow

iOS 9 introduced the Metal Performance Shaders (MPS) framework alongside Metal/MetalKit. It uses the GPU for efficient image computation, such as Gaussian blur, image histogram calculation, and Sobel edge detection, and (with the MPSCNN layers added in iOS 10) can be used to implement deep learning.
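
As a concrete illustration, a minimal sketch of encoding one built-in MPS image kernel (Gaussian blur); sourceTexture and destinationTexture are assumed to be MTLTextures you created elsewhere, e.g. with MTKTextureLoader:

#import <Metal/Metal.h>
#import <MetalPerformanceShaders/MetalPerformanceShaders.h>

void blurTexture(id<MTLTexture> sourceTexture, id<MTLTexture> destinationTexture) {
    id<MTLDevice> device = MTLCreateSystemDefaultDevice();
    id<MTLCommandQueue> queue = [device newCommandQueue];
    id<MTLCommandBuffer> commandBuffer = [queue commandBuffer];

    // MPSImageGaussianBlur is one of the built-in image kernels; sigma controls the blur radius
    MPSImageGaussianBlur *blur = [[MPSImageGaussianBlur alloc] initWithDevice:device sigma:4.0f];
    [blur encodeToCommandBuffer:commandBuffer
                  sourceTexture:sourceTexture
             destinationTexture:destinationTexture];

    [commandBuffer commit];
    [commandBuffer waitUntilCompleted];
}
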
Introduction to Metal and basic usage
mps
inception-v3_demo
metal

Tips

Metal debugging tools: https://developer.apple.com/library/content/documentation/Miscellaneous/Conceptual/MetalProgrammingGuide/Dev-Technique/Dev-Technique.html
https://developer.apple.com/videos/play/wwdc2015/610/


Reader comments

  • 进军明天: Can a Core ML model be downloaded and loaded dynamically at runtime? The model is fairly large.
    陆号: Yes, see this example: https://github.com/eugenebokhan/Awesome-ML
    进军明天: @陆号 Great, thanks.
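
A minimal sketch of the dynamic-loading idea from this comment, using Core ML's on-device model compilation (iOS 11+); the function name and the downloaded-file URL are placeholders:

#import <CoreML/CoreML.h>

// downloadedModelURL points at a raw .mlmodel file fetched from your server
MLModel *loadDownloadedModel(NSURL *downloadedModelURL, NSError **error) {
    // Compile the .mlmodel on the device; this produces a .mlmodelc directory in a temp location
    NSURL *compiledURL = [MLModel compileModelAtURL:downloadedModelURL error:error];
    if (compiledURL == nil) {
        return nil;
    }
    // In a real app, move compiledURL to a permanent location (e.g. Application Support)
    // before loading, since the temporary copy may be cleaned up by the system
    return [MLModel modelWithContentsOfURL:compiledURL error:error];
}
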
