
强脑 - Project Summary

Author: 歌手的剑 | Published 2019-06-12 17:38

    强脑

    Getting Started

    CocoaPods

    Install / update dependencies:

    $ cd BrainProject
    $ pod install
    

    Usage

    • Use Command + B to build the project.
    • Use Command + R to run the project.

    Introduction

    The 强脑 project

    Overall Architecture

    • App framework: built on UIKit.
    • Networking: data requests go through an in-house wrapper around AFNetworking; YYModel handles JSON/model mapping.
    • UI layout: two approaches, Storyboard/XIB and code with Masonry.
    • Persistent caching: depending on the scenario, UserDefaults, NSCoding object archiving to plist, Realm, etc. (a sketch of the NSCoding path follows this list).
    • Design pattern: MVC.
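
    A minimal sketch of the NSCoding + plist archiving path mentioned above; KMUserInfo, its properties, and the file name are hypothetical, not actual project classes:

        // Hypothetical model conforming to NSCoding (illustration only)
        @interface KMUserInfo : NSObject <NSCoding>
        @property (nonatomic, copy) NSString *name;
        @property (nonatomic, assign) NSInteger age;
        @end

        @implementation KMUserInfo
        - (void)encodeWithCoder:(NSCoder *)coder {
            [coder encodeObject:self.name forKey:@"name"];
            [coder encodeInteger:self.age forKey:@"age"];
        }
        - (instancetype)initWithCoder:(NSCoder *)coder {
            if (self = [super init]) {
                _name = [coder decodeObjectForKey:@"name"];
                _age = [coder decodeIntegerForKey:@"age"];
            }
            return self;
        }
        @end

        // Archive to a plist under Documents and read it back
        KMUserInfo *user = [KMUserInfo new];
        NSString *path = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES).firstObject
                          stringByAppendingPathComponent:@"user.plist"];
        [NSKeyedArchiver archiveRootObject:user toFile:path];
        KMUserInfo *cached = [NSKeyedUnarchiver unarchiveObjectWithFile:path];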

    Directory Overview

    | ___ BrainProject
    | | ___ AppDelegate.m (app initialization)
    | | ___ Request (networking)
    | | | ___ CommonRequest (shared requests: class methods for uploading images/videos, etc.)
    | | | ___ BaseRequest (global GET/POST wrappers and unified host management)
    | | | | ___ KMRequestApi.h (global host management; double-check the server address before packaging)
    | | | ___ Q_RAYRequest (network requests for the Q_RAY module: ...)
    | | | ___ ShoesPlanRequest (network requests for the ShoesPlan module: ...)
    | | | ___ TrainClassRequest (network requests for the TrainClass module: ...)
    | | | ___ Brain (network requests for the Brain module: ...)
    | | | ___ Shoes (network requests for the Shoes module: ...)
    | | | ___ Robot (network requests for the Robot module: ...)
    | | | ___ Login (network requests for the Login module: phone-number login / SMS-code login / third-party login / registration / password recovery / logout / ...)
    | | ___ Helpers
    | | | ___ to be continued...
    | | ___ Protocol
    | | | ___ to be continued...
    | | ___ Lib (third-party code kept as single files dragged into the project, or libraries imported manually because they do not support CocoaPods installation and upgrades)
    | | | ___ OpenCV
    | | | ___ SwiftyJSON
    | | | ___ FDFullscreenPopGesture
    | | | ___ MobPush
    | | | ___ ShareSDK
    | | ___ Extension (extensions of system framework functionality)
    | | ___ Class
    | | ___ Sources
    | | ___ Tools
    | | ___ BaseModule
    | | ___ Common
    

    Third-Party Libraries

    The project uses CocoaPods to manage its third-party frameworks.

    Recommendation: foundational components such as networking, image loading, and caching are fine to depend on; UI-related libraries are best treated as reference material rather than imported, since importing them makes maintenance and upgrades harder.

    // Third-party libraries currently used in the project
        pod 'RealReachability'
        pod 'MJRefresh'
        pod 'SDWebImage'
        pod 'YYCategories'
        pod 'Masonry'
        pod 'MBProgressHUD'
        pod 'AFNetworking'
        pod 'YYModel'
        pod 'Bugly'
        pod 'Realm'
        pod 'YYText'
        pod 'YYCache'
        pod 'ZFPlayer', '~> 3.0'
        pod 'ZFPlayer/ControlView', '~> 3.0'
        pod 'ZFPlayer/AVPlayer', '~> 3.0'
        pod 'TZImagePickerController'
    

    Realm local database usage notes:

    • Install Realm;
    With CocoaPods:
    pod cache clean Realm
    pod cache clean RealmSwift
    pod deintegrate || rm -rf Pods
    pod install --verbose
    rm -rf ~/Library/Developer/Xcode/DerivedData
    
    With Carthage:
    rm -rf Carthage
    rm -rf ~/Library/Developer/Xcode/DerivedData
    carthage update
    
    • Define models that inherit from RLMObject (remember to set a primary key, and mind Objective-C/Swift primitive-type conversions); a minimal sketch follows;
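    A minimal model sketch (QNPageModel appears in the CRUD snippets below; its exact property set is assumed here):

        // Assumed shape of QNPageModel, inferred from the snippets below
        @interface QNPageModel : RLMObject
        @property NSInteger nid;       // primary key
        @property NSInteger pagenum;
        @end

        @implementation QNPageModel
        + (NSString *)primaryKey {
            return @"nid";             // required for addOrUpdateObject:
        }
        @end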
    • CRUD (create / delete / update / read):
        // Create
        RLMRealm *realm = [RLMRealm defaultRealm];
        [realm transactionWithBlock:^{
            [realm addOrUpdateObject:thispage];
        }];
        
        // Delete
        [realm transactionWithBlock:^{
            [realm deleteObject:thispage];
        }];
    
        // Update
        QNPageModel *thisrepage = [[QNPageModel allObjects] firstObject];
        [realm transactionWithBlock:^{
            thisrepage.pagenum = 1000000;
        }];
    
    
        // Asynchronous update
        // Query and update the result in another thread
        dispatch_async(dispatch_queue_create("background", 0), ^{
            @autoreleasepool {
                QNPageModel *thisrepage2 = [[QNPageModel allObjects] firstObject];
                RLMRealm *realm = [RLMRealm defaultRealm];
                [realm beginWriteTransaction];
                thisrepage2.pagenum = 3;
                [realm commitWriteTransaction];
            }
        });
    
        // Read: a single object
        QNPageModel *thisrepage1 = [[QNPageModel allObjects] firstObject];
        NSLog(@"%@", thisrepage1);
    
        // Read: a collection of objects
        RLMResults<QNNoteModel *> *allnotes = [QNNoteModel allObjects];
        NSLog(@"%@", allnotes);
    
        // Read: the object with nid == 1
        QNNoteModel *nid1model = [QNNoteModel objectsWhere:@"nid == 1"].firstObject;
        NSLog(@"%@", nid1model);
    
    
    • Schema migrations triggered by model property changes
    Scenario 1: the old schema had two properties, firstName and lastName; after an API upgrade the new schema merges them into a single fullName property.
    RLMRealmConfiguration *config = [RLMRealmConfiguration defaultConfiguration];
    config.schemaVersion = 1;
    config.migrationBlock = ^(RLMMigration *migration, uint64_t oldSchemaVersion) {
        // We haven’t migrated anything yet, so oldSchemaVersion == 0
        if (oldSchemaVersion < 1) {
            // The enumerateObjects:block: method iterates
            // over every 'Person' object stored in the Realm file
            [migration enumerateObjects:Person.className
                                  block:^(RLMObject *oldObject, RLMObject *newObject) {
    
            // combine name fields into a single field
            newObject[@"fullName"] = [NSString stringWithFormat:@"%@ %@",
                                          oldObject[@"firstName"],
                                          oldObject[@"lastName"]];
            }];
        }
    };
    [RLMRealmConfiguration setDefaultConfiguration:config];
    
    Scenario 2: renaming a property (the old schema had an age property; after an API upgrade it becomes yearsSinceBirth).
    RLMRealmConfiguration *config = [RLMRealmConfiguration defaultConfiguration];
    config.schemaVersion = 1;
    config.migrationBlock = ^(RLMMigration *migration, uint64_t oldSchemaVersion) {
        // We haven’t migrated anything yet, so oldSchemaVersion == 0
        if (oldSchemaVersion < 1) {
            // The renaming operation should be done outside of calls to `enumerateObjects:`.
        [migration renamePropertyForClass:Person.className oldName:@"age" newName:@"yearsSinceBirth"];
        }
    };
    [RLMRealmConfiguration setDefaultConfiguration:config];
    
    
    • Linear schema migration
    • Live sync
    • Conflict resolution
    • Notifications
    // Observe Realm notifications
    token = [realm addNotificationBlock:^(NSString *notification, RLMRealm * realm) {
        [myViewController updateUI];
    }];
    
     // After receiving a notification, refresh only the affected UI instead of reloading everything
    - (void)viewDidLoad {
        [super viewDidLoad];
    
        // Observe RLMResults Notifications
        __weak typeof(self) weakSelf = self;
        self.notificationToken = [[Person objectsWhere:@"age > 5"] 
          addNotificationBlock:^(RLMResults<Person *> *results, RLMCollectionChange *changes, NSError *error) {
            
            if (error) {
                NSLog(@"Failed to open Realm on background worker: %@", error);
                return;
            }
    
            UITableView *tableView = weakSelf.tableView;
            // Initial run of the query will pass nil for the change information
            if (!changes) {
                [tableView reloadData];
                return;
            }
    
            // Query results have changed, so apply them to the UITableView
            [tableView beginUpdates];
            [tableView deleteRowsAtIndexPaths:[changes deletionsInSection:0]
                             withRowAnimation:UITableViewRowAnimationAutomatic];
            [tableView insertRowsAtIndexPaths:[changes insertionsInSection:0]
                             withRowAnimation:UITableViewRowAnimationAutomatic];
            [tableView reloadRowsAtIndexPaths:[changes modificationsInSection:0]
                             withRowAnimation:UITableViewRowAnimationAutomatic];
            [tableView endUpdates];
        }];
    }
    
    - (void)dealloc {
        [self.notificationToken invalidate];
    }
    

    SDWebImage image loading and caching library: underlying principles and internal flow:

    • How To Use;
    Objective-C:
    #import <SDWebImage/SDWebImage.h>
    ...
    [imageView sd_setImageWithURL:[NSURL URLWithString:@"http://www.domain.com/path/to/image.jpg"]
                 placeholderImage:[UIImage imageNamed:@"placeholder.png"]];
                 
    Swift:
    import SDWebImage
    
    imageView.sd_setImage(with: URL(string: "http://www.domain.com/path/to/image.jpg"), placeholderImage: UIImage(named: "placeholder.png"))
    
    • Options:
         // Retry after a failed download
         SDWebImageRetryFailed = 1 << 0,

         // Low priority: downloads started during UI interactions are delayed, e.g. while a UIScrollView is decelerating
         SDWebImageLowPriority = 1 << 1,

         // Cache in memory only
         SDWebImageCacheMemoryOnly = 1 << 2,

         // Progressive download: the image is displayed incrementally as it downloads
         SDWebImageProgressiveDownload = 1 << 3,

         // Refresh the cache
         SDWebImageRefreshCached = 1 << 4,

         // Continue the download in the background
         SDWebImageContinueInBackground = 1 << 5,

         // Sets NSMutableURLRequest.HTTPShouldHandleCookies = YES
         SDWebImageHandleCookies = 1 << 6,

         // Allow invalid SSL certificates
         SDWebImageAllowInvalidSSLCertificates = 1 << 7,

         // High-priority download
         SDWebImageHighPriority = 1 << 8,

         // Delay showing the placeholder
         SDWebImageDelayPlaceholder = 1 << 9,

         // Also apply image transformations to animated images
         SDWebImageTransformAnimatedImage = 1 << 10
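
    A usage sketch combining several options; the completed block's cacheType reveals whether the image came from the network, the memory cache, or the disk cache (the URL is a placeholder):

        [imageView sd_setImageWithURL:[NSURL URLWithString:@"http://www.domain.com/path/to/image.jpg"]
                     placeholderImage:[UIImage imageNamed:@"placeholder.png"]
                              options:SDWebImageRetryFailed | SDWebImageProgressiveDownload
                            completed:^(UIImage *image, NSError *error, SDImageCacheType cacheType, NSURL *imageURL) {
                                // SDImageCacheTypeNone = fetched from the network; Memory / Disk = cache hit
                                NSLog(@"cacheType = %ld", (long)cacheType);
                            }];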
    
    • Pipeline:

    1. [setImageWithURL:placeholderImage:options:] -> show the placeholderImage, then start processing the image for the given url;

    2. [SDWebImageManager downloadWithURL:delegate:options:userInfo:] -> SDImageCache checks whether the image has already been downloaded; on a memory-cache hit it is returned straight to SDWebImageManager for display;

    3. [NSInvocationOperation] -> an NSInvocationOperation is created and queued to check whether the image is cached on disk;

    4. [imageCache:didNotFindImageForKey:userInfo:] -> if the image is not on disk either, the not-found callback fires;

    5. [SDWebImageDownloader] -> the image is downloaded by NSURLConnection; connection:didReceiveData: uses ImageIO to render the image progressively as bytes arrive;

    6. the downloaded image is decoded on an NSOperationQueue so decoding does not slow down the main-thread UI;

    7. once decoding finishes, the callback displays the image;

    8. [SDImageCache] stores the image in the memory cache and on disk at the same time, on a separate NSInvocationOperation so the main thread is not blocked;


    Audio and video composition, encoding and decoding:

    • How To Use;
    
    


    Rich-text module (recording / inserting images and video):

    • How To Use;
    
    


    Bluetooth module:

    • How To Use;
    
    


    3D module:

    • How To Use;
    
    


    OpenCV and Core Image module - rectangle detection / cropping / fuzzy matching:

    Rectangle detection with Core Image works as follows:
    • CIDetector.h in Core Image ships with four built-in detector types
    /* Face detection */
    CORE_IMAGE_EXPORT NSString* const CIDetectorTypeFace NS_AVAILABLE(10_7, 5_0);
    
    /* Rectangle detection */
    CORE_IMAGE_EXPORT NSString* const CIDetectorTypeRectangle NS_AVAILABLE(10_10, 8_0);
    
    /* QR code detection */
    CORE_IMAGE_EXPORT NSString* const CIDetectorTypeQRCode NS_AVAILABLE(10_10, 8_0);
    
    /* Text detection */
    #if __OBJC2__
    CORE_IMAGE_EXPORT NSString* const CIDetectorTypeText NS_AVAILABLE(10_11, 9_0);
    #endif
    
    • Edge detection:
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeRectangle context:nil options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];
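    Feeding an image to the detector returns CIRectangleFeature objects whose corner points are what the drawing code in the next step consumes (a minimal sketch; image is assumed to be the captured UIImage):

        CIImage *ciImage = [CIImage imageWithCGImage:image.CGImage];
        CIRectangleFeature *feature = (CIRectangleFeature *)[detector featuresInImage:ciImage].firstObject;
        if (feature) {
            // Corner points in image space; the next step maps them into UIKit coordinates
            CGPoint topLeft = feature.topLeft, topRight = feature.topRight;
            CGPoint bottomLeft = feature.bottomLeft, bottomRight = feature.bottomRight;
        }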
    
    • Draw and display the detected edges with CAShapeLayer:
    // Convert image-space coordinates into the UIKit coordinate system
    TransformCIFeatureRect featureRect = [self transfromRealRectWithImageRect:imageRect topLeft:topLeft topRight:topRight bottomLeft:bottomLeft bottomRight:bottomRight];
    // Path along the detected edges
    UIBezierPath *path = [UIBezierPath new];
    [path moveToPoint:featureRect.topLeft];
    [path addLineToPoint:featureRect.topRight];
    [path addLineToPoint:featureRect.bottomRight];
    [path addLineToPoint:featureRect.bottomLeft];
    [path closePath];
    // Background mask path
    UIBezierPath *rectPath = [UIBezierPath bezierPathWithRect:CGRectMake(-5, -5, self.frame.size.width + 10, self.frame.size.height + 10)];
    [rectPath setUsesEvenOddFillRule:YES];
    [rectPath appendPath:path];
    _rectOverlay.path = rectPath.CGPath;
    
    Image cropping uses CGImageCreateWithImageInRect from CGImage:
        // imageRef
        CGImageRef imageRef = image.CGImage;
        
        // Pass the original imageRef plus a rect to get a new CGImageRef
        CGImageRef topimageref = CGImageCreateWithImageInRect(imageRef, toprect);
    
        // Build a UIImageView from the new topimageref
        UIImageView *topImage = [[UIImageView alloc] initWithImage:[[UIImage alloc] initWithCGImage:topimageref]];
    
        // CGImageCreateWithImageInRect returns a +1 reference, which must be released
        CGImageRelease(topimageref);
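    One caveat worth a sketch: CGImageCreateWithImageInRect works in pixel coordinates, so a crop rect measured in points (viewRect below is hypothetical) should be scaled by image.scale first:

        CGFloat scale = image.scale;
        CGRect toprect = CGRectMake(viewRect.origin.x * scale,
                                    viewRect.origin.y * scale,
                                    viewRect.size.width * scale,
                                    viewRect.size.height * scale);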
    
    Template matching with OpenCV is implemented as follows:
    #pragma mark - Generate an image that marks the matched targets
    + (UIImage *)imageWithColor:(UIColor *)rectColor size:(CGSize)size rectArray:(NSArray *)rectArray{
        
        CGRect rect = CGRectMake(0, 0, size.width, size.height);
        
        // 1. Open an image graphics context
        UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0.0);
        
        // 2. Get the current context
        CGContextRef cxtRef = UIGraphicsGetCurrentContext();
        
        // 3. Stroke color for the marker rectangles
        // Iterate over the target rects
        for (NSInteger i = 0; i < rectArray.count; i++) {
            NSValue *rectValue = rectArray[i];
            CGRect targetRect = rectValue.CGRectValue;
            UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:targetRect cornerRadius:5];
            // Add the path to the context
            CGContextAddPath(cxtRef, path.CGPath);
            [rectColor setStroke];
            [[UIColor clearColor] setFill];
            // Render the paths in the context
            /**
             kCGPathFill,   fill
             kCGPathStroke, stroke
             kCGPathFillStroke,  fill & stroke
             */
            CGContextDrawPath(cxtRef,kCGPathFillStroke);
        }
        
        // Fill with transparent color
        CGContextSetFillColorWithColor(cxtRef, [UIColor clearColor].CGColor);
        
        CGContextFillRect(cxtRef, rect);
        
        // 4. Grab the image
        UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
        
        // 5. Close the graphics context
        UIGraphicsEndImageContext();
        
        // 6. Return the image
        return img;
    }
    
    #pragma mark - Convert a CMSampleBufferRef to cv::Mat
    +(cv::Mat)bufferToMat:(CMSampleBufferRef) sampleBuffer{
        CVImageBufferRef imgBuf = CMSampleBufferGetImageBuffer(sampleBuffer);
        
        // Lock the pixel buffer memory
        CVPixelBufferLockBaseAddress(imgBuf, 0);
        // get the address to the image data
        void *imgBufAddr = CVPixelBufferGetBaseAddress(imgBuf);
        
        // get image properties
        int w = (int)CVPixelBufferGetWidth(imgBuf);
        int h = (int)CVPixelBufferGetHeight(imgBuf);
        
        // create the cv mat
        cv::Mat mat(h, w, CV_8UC4, imgBufAddr, 0);
    //    // Convert to grayscale
    //    cv::Mat edges;
    //    cv::cvtColor(mat, edges, CV_BGR2GRAY);
        
        // Rotate 90 degrees via transpose
        cv::Mat transMat;
        cv::transpose(mat, transMat);
        
        // Flip: flipCode 0 flips around the x-axis, a positive value around the y-axis, a negative value around both
        cv::Mat flipMat;
        cv::flip(transMat, flipMat, 1);
        
        CVPixelBufferUnlockBaseAddress(imgBuf, 0);
        
        return flipMat;
    }
    
    #pragma mark - Convert a CMSampleBufferRef to UIImage
    + (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
    {
        // Get a CMSampleBuffer's Core Video image buffer for the media data
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        // Lock the base address of the pixel buffer
        CVPixelBufferLockBaseAddress(imageBuffer, 0);
        
        // Get the number of bytes per row for the pixel buffer
        void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
        
        // Get the number of bytes per row for the pixel buffer
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
        // Get the pixel buffer width and height
        size_t width = CVPixelBufferGetWidth(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);
        
        // Create a device-dependent RGB color space
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        
        // Create a bitmap graphics context with the sample buffer data
        CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                     bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        
        // First create a CIImage from the pixel buffer
        CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
        CIContext *temporaryContext = [CIContext contextWithOptions:nil];
        CGImageRef videoImage = [temporaryContext
                                 createCGImage:ciImage
                                 fromRect:CGRectMake(0, 0,
                                                     CVPixelBufferGetWidth(imageBuffer),
                                                     CVPixelBufferGetHeight(imageBuffer))];
        // Then rotate 90 degrees
        CGAffineTransform transform = CGAffineTransformIdentity;
        transform = CGAffineTransformTranslate(transform, 0, height);
        transform = CGAffineTransformRotate(transform, -M_PI_2);
        CGContextConcatCTM(context, transform);
        CGContextDrawImage(context, CGRectMake(0,0,height,width), videoImage);
        CGImageRelease(videoImage);
        
        // Create a Quartz image from the pixel data in the bitmap graphics context
        CGImageRef quartzImage = CGBitmapContextCreateImage(context);
        
        // Unlock the pixel buffer
        CVPixelBufferUnlockBaseAddress(imageBuffer,0);
        
        // Free up the context and color space
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);
        
        // Create an image object from the Quartz image
        UIImage *image = [UIImage imageWithCGImage:quartzImage];
        
        // Release the Quartz image
        CGImageRelease(quartzImage);
        
        return (image);
    }
    
    // Fast adaptive binarization using local integral images; see https://blog.csdn.net/realizetheworld/article/details/46971143
    +(cv::Mat)convToBinary:(cv::Mat) src{
        cv::Mat dst;
        cvtColor(src,dst,CV_BGR2GRAY);
        int x1, y1, x2, y2;
        int count=0;
        long long sum=0;
        int S=src.rows>>3;  // size of the S*S window
        int T=15;         /* Percentage used in the final threshold comparison. From the source: "If the value of the current pixel is t percent less than this average
                             then it is set to black, otherwise it is set to white." */
        int W=dst.cols;
        int H=dst.rows;
        long long **Argv;
        Argv=new long long*[dst.rows];
        for(int ii=0;ii<dst.rows;ii++)
        {
            Argv[ii]=new long long[dst.cols];
        }
        
        for(int i=0;i<W;i++)
        {
            sum=0;
            for(int j=0;j<H;j++)
            {
                sum+=dst.at<uchar>(j,i);
                if(i==0)
                    Argv[j][i]=sum;
                else
                    Argv[j][i]=Argv[j][i-1]+sum;
            }
        }
        
        for(int i=0;i<W;i++)
        {
            for(int j=0;j<H;j++)
            {
                x1=i-S/2;
                x2=i+S/2;
                y1=j-S/2;
                y2=j+S/2;
                if(x1<0)
                    x1=0;
                if(x2>=W)
                    x2=W-1;
                if(y1<0)
                    y1=0;
                if(y2>=H)
                    y2=H-1;
                count=(x2-x1)*(y2-y1);
                sum=Argv[y2][x2]-Argv[y1][x2]-Argv[y2][x1]+Argv[y1][x1];
                
                
                if((long long)(dst.at<uchar>(j,i)*count)<(long long)sum*(100-T)/100)
                    dst.at<uchar>(j,i)=0;
                else
                    dst.at<uchar>(j,i)=255;
            }
        }
        for (int i = 0 ; i < dst.rows; ++i)
        {
            delete [] Argv[i];
        }
        delete [] Argv;
        return dst;
    }
    
    • Concrete steps:
    • Convert the template image into a grayscale matrix;
    • Grab video frames and turn each into a grayscale matrix;
    • Image-pyramid matching at several scales, at most 0.8x the camera image and at least 0.3x the template image;
    • Compare the two images for a shared region;
    • Save the matched location of the current template matrix as a cv::Point
    // Convert an image into a grayscale matrix
    -(cv::Mat)initTemplateImage:(NSString *)imgName{
        UIImage *templateImage = [UIImage imageNamed:imgName];
        cv::Mat tempMat;
        UIImageToMat(templateImage, tempMat);
        cv::cvtColor(tempMat, tempMat, CV_BGR2GRAY);
        return tempMat;
    }
    
    #pragma mark - Grab video frames and process them
    - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection{
        [NSThread sleepForTimeInterval:0.5];
    
        cv::Mat imgMat;
        imgMat = [OpenCVManager bufferToMat:sampleBuffer];
        // Return early if either matrix is empty
        if (imgMat.empty() || self.templateMat.empty()) {
            return;
        }
        
        // Convert to grayscale
        cv::cvtColor(imgMat, imgMat, CV_BGR2GRAY);
        UIImage *tempImg = MatToUIImage(imgMat);
        
        // Get the marked rectangles
        NSArray *rectArr = [self compareByLevel:6 CameraInput:imgMat];
        // Render them into an image
        UIImage *rectImg = [OpenCVManager imageWithColor:[UIColor redColor] size:tempImg.size rectArray:rectArr];
        
        CGImageRef cgImage = rectImg.CGImage;
        
        // Dispatching synchronously from this background thread onto the main queue does not deadlock
        dispatch_sync(dispatch_get_main_queue(), ^{
            if (cgImage) {
                self.tagLayer.contents = (__bridge id _Nullable)cgImage;
            }
        });
    }
    
    
    // Image-pyramid matching at several scales, at most 0.8*camera image, at least 0.3*template image
    -(NSArray *)compareByLevel:(int)level CameraInput:(cv::Mat) inputMat{
        // Camera input size
        int inputRows = inputMat.rows;
        int inputCols = inputMat.cols;
        
        // Original template size
        int tRows = self.templateMat.rows;
        int tCols = self.templateMat.cols;
        
        NSMutableArray *marr = [NSMutableArray array];
        
        for (int i = 0; i < level; i++) {
            // Midpoint of the loop count
            int mid = level*0.5;
            // Target size
            cv::Size dstSize;
            if (i<mid) {
                // First half of the loop: shrink the template
                dstSize = cv::Size(tCols*(1-i*0.2),tRows*(1-i*0.2));
            }else{
                // Second half: enlarge it and compare
                int upCols = tCols*(1+i*0.2);
                int upRows = tRows*(1+i*0.2);
                // Guard against exceeding the input size, which would crash
                if (upCols>=inputCols || upRows>=inputRows) {
                    upCols = tCols;
                    upRows = tRows;
                }
                dstSize = cv::Size(upCols,upRows);
            }
            // The resized template image
            cv::Mat resizeMat;
            cv::resize(self.templateMat, resizeMat, dstSize);
            // Then check for a match
            BOOL cmpBool = [self compareInput:inputMat templateMat:resizeMat];
            
            if (cmpBool) {
                NSLog(@"匹配缩放级别level==%d",i);
                CGRect rectF = CGRectMake(currentLoc.x, currentLoc.y, dstSize.width, dstSize.height);
                NSValue *rValue = [NSValue valueWithCGRect:rectF];
                [marr addObject:rValue];
                break;
            }
        }
        return marr;
    }
    
    /**
     Check whether two images share a matching region
     
     @return YES if they do
     */
    -(BOOL)compareInput:(cv::Mat) inputMat templateMat:(cv::Mat)tmpMat{
        int result_rows = inputMat.rows - tmpMat.rows + 1;
        int result_cols = inputMat.cols - tmpMat.cols + 1;
        
        cv::Mat resultMat = cv::Mat(result_cols,result_rows,CV_32FC1);
        cv::matchTemplate(inputMat, tmpMat, resultMat, cv::TM_CCOEFF_NORMED);
        
        double minVal, maxVal;
        cv::Point minLoc, maxLoc, matchLoc;
        cv::minMaxLoc( resultMat, &minVal, &maxVal, &minLoc, &maxLoc, cv::Mat());
        //    matchLoc = maxLoc;
        //    NSLog(@"min==%f,max==%f",minVal,maxVal);
        dispatch_async(dispatch_get_main_queue(), ^{
        self.similarLevelLabel.text = [NSString stringWithFormat:@"Similarity: %.2f",maxVal];
        });
        
        if (maxVal > 0.7) {
        // A match exists; save the first point of the matched region
            currentLoc = maxLoc;
            return YES;
        }else{
            return NO;
        }
    }
    

    Vision + Core ML module:

    Human-body joint detection with CoreMedia + Vision + Core ML works as follows:
    • Import the Core ML model into the project (Core ML models are produced by training with other machine-learning tools and converting the result to the Core ML format);
    • Select the file: Xcode generates input and output classes for the model, plus a main class with a model property and two prediction methods;
    • The Vision framework converts familiar image formats into the CVPixelBuffer-typed sceneImage that GoogLeNetPlacesInput expects.
    • Vision also wraps GoogLeNetPlacesOutput in its own results type and manages the calls to the prediction method, so of all the generated code we only ever use the model property.
    • Load the model:
        // Load the ML model through its generated class
        guard let model = try? VNCoreMLModel(for: GoogLeNetPlaces().model) else {
          fatalError("can't load Places ML model")
        }
    
    • Create the request:
    // Create a Vision request with a completion handler
    let request = VNCoreMLRequest(model: model) { [weak self] request, error in
      guard let results = request.results as? [VNClassificationObservation],
        let topResult = results.first else {
          fatalError("unexpected result type from VNCoreMLRequest")
      }
    
  // Update the UI on the main thread
      let article = (self?.vowels.contains(topResult.identifier.first!))! ? "an" : "a"
      DispatchQueue.main.async { [weak self] in
        self?.answerLabel.text = "\(Int(topResult.confidence * 100))% it's \(article) \(topResult.identifier)"
      }
    }
    
    Alternatively, create the request in visionModel's didSet observer:
        var visionModel: VNCoreMLModel! {
            didSet {
                request = VNCoreMLRequest(model: visionModel, completionHandler: visionRequestDidComplete)
                request.imageCropAndScaleOption = .scaleFill
            }
        }
    
    • VNCoreMLRequest.results:
      Note: when the Core ML model is a classifier rather than a predictor or image processor, the Vision framework returns an array of VNClassificationObservation objects.
    VideoCapture.swift: starts the camera and switches between the front and back cameras;
    JointViewController.swift: loads model_cpm and issues the classification requests;
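
    The snippets above build the request but never dispatch it. A minimal sketch of that missing step, written in Objective-C to match the rest of the project (pixelBuffer is assumed to come from the camera output callback, and request is the VNCoreMLRequest built above):

        VNImageRequestHandler *handler = [[VNImageRequestHandler alloc] initWithCVPixelBuffer:pixelBuffer
                                                                                      options:@{}];
        NSError *error = nil;
        // Runs the VNCoreMLRequest; results arrive in its completion handler
        if (![handler performRequests:@[request] error:&error]) {
            NSLog(@"Vision request failed: %@", error);
        }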
    

    Network Module Design

    AFNetworking handles all network requests, and YYModel does the JSON parsing and model mapping, as sketched below.
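
    A sketch of that pairing, assuming AFNetworking 3.x; the endpoint and the KMDemoModel class are hypothetical, not taken from the project:

        AFHTTPSessionManager *manager = [AFHTTPSessionManager manager];
        [manager GET:@"https://api.example.com/pages"
          parameters:nil
            progress:nil
             success:^(NSURLSessionDataTask *task, id responseObject) {
                 // One call maps the parsed JSON onto a model class
                 KMDemoModel *model = [KMDemoModel yy_modelWithJSON:responseObject];
                 NSLog(@"%@", model);
             }
             failure:^(NSURLSessionDataTask *task, NSError *error) {
                 NSLog(@"request failed: %@", error);
             }];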

    Coding Style

    AOP and functional styles are recommended; use Extensions (categories) to extend existing functionality, as in the sketch below.
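
    A minimal category sketch of the Extension approach (the category and method names are illustrative):

        // UIView+KMFrame.h - convenience accessors without subclassing
        @interface UIView (KMFrame)
        @property (nonatomic, assign) CGFloat km_x;
        @end

        // UIView+KMFrame.m
        @implementation UIView (KMFrame)
        - (CGFloat)km_x {
            return self.frame.origin.x;
        }
        - (void)setKm_x:(CGFloat)x {
            CGRect frame = self.frame;
            frame.origin.x = x;
            self.frame = frame;
        }
        @end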


    Packaging and Uploading to the App Store

    • To be continued...
