iOS: Capturing an In-App Screenshot and Processing It Further

Author: dragonYao | Published 2017-02-08 16:15 · 1169 reads

    Sooner or later a project needs a screenshot, and the raw capture rarely satisfies the requirements as-is. At that point you have to post-process the captured image: stamp a watermark on it, stitch a logo onto it before sharing, and so on. This article walks through one such implementation: detect the in-app screenshot, append a logo, and share the resulting image. Demo link

    • Step 1: Detect the user's screenshot action inside the app → in AppDelegate's application:didFinishLaunchingWithOptions:, register for UIApplicationUserDidTakeScreenshotNotification and implement the handler. Code:
     // The system posts this notification whenever the user takes a screenshot
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(userDidTakeScreenshot:)
                                                     name:UIApplicationUserDidTakeScreenshotNotification object:nil];
    // Handler for the screenshot notification
    - (void)userDidTakeScreenshot:(NSNotification *)notification
    {
        __weak typeof(self) weakSelf = self;
        // If the popup is already on screen, bail out; this guards against rapid repeated screenshots
        for (UIView *view in self.window.subviews) {
            if ([view isKindOfClass:[CustomIOSAlertView class]]) {
                return;
            }
        }
        UIImage *image = [self imageWithScreenshot];
        _popAlertView = [[CustomIOSAlertView alloc] init];
        [_popAlertView setContainerView:[self createViews:image]];
        [_popAlertView setButtonTitles:[NSMutableArray arrayWithObjects:@"Save", @"Share", nil]];
        [_popAlertView setOnButtonTouchUpInside:^(CustomIOSAlertView *alertView, int buttonIndex) {
            if (buttonIndex == 0) {
                // A custom watermark can be stamped on the image before saving
                NSLog(@"Save");
                UIImage *saveImg = [weakSelf waterMarkForImage:image withMarkName:@"markInfo"];
                [weakSelf saveImageAlbum:saveImg];
            }
            if (buttonIndex == 1) {
                NSLog(@"Share");
            }
            [alertView close];
        }];
        [_popAlertView setUseMotionEffects:YES];
        [_popAlertView show];
    }
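    The handler above calls a saveImageAlbum: helper that the article never shows. A minimal sketch (the method name comes from the code above; the body here is an assumption) could write the image to the photo album with UIImageWriteToSavedPhotosAlbum:

    ```objc
    // Hypothetical implementation of the saveImageAlbum: helper used above.
    // UIImageWriteToSavedPhotosAlbum saves asynchronously; the completion
    // callback must have exactly this selector signature.
    - (void)saveImageAlbum:(UIImage *)image
    {
        UIImageWriteToSavedPhotosAlbum(image, self,
            @selector(image:didFinishSavingWithError:contextInfo:), NULL);
    }

    - (void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo
    {
        if (error) {
            NSLog(@"Save failed: %@", error.localizedDescription);
        } else {
            NSLog(@"Saved to the photo album");
        }
    }
    ```

    Note that on iOS 10 and later, writing to the photo library also requires the photo-library usage description key in Info.plist.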
    
    • Step 2: Capture the screen contents and convert them into a UIImage. Code:
    // Returns the screenshot as a UIImage
    - (UIImage *)imageWithScreenshot
    {
        NSData *imageData = [self dataWithScreenshotInPNGFormat];
        return [UIImage imageWithData:imageData];
    }
    // Renders every window into a PNG, honoring the interface orientation
    - (NSData *)dataWithScreenshotInPNGFormat
    {
        CGSize imageSize = CGSizeZero;
        UIInterfaceOrientation orientation = [UIApplication sharedApplication].statusBarOrientation;
        if (UIInterfaceOrientationIsPortrait(orientation)) {
            imageSize = [UIScreen mainScreen].bounds.size;
        }
        else {
            imageSize = CGSizeMake([UIScreen mainScreen].bounds.size.height, [UIScreen mainScreen].bounds.size.width);
        }
    
        UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
        CGContextRef context = UIGraphicsGetCurrentContext();
    
        for (UIWindow *window in [[UIApplication sharedApplication] windows]) {
    
            CGContextSaveGState(context);
    
            CGContextTranslateCTM(context, window.center.x, window.center.y);
    
            CGContextConcatCTM(context, window.transform);
    
            CGContextTranslateCTM(context, -window.bounds.size.width * window.layer.anchorPoint.x, -window.bounds.size.height * window.layer.anchorPoint.y);
    
            if (orientation == UIInterfaceOrientationLandscapeLeft)
            {
                CGContextRotateCTM(context, M_PI_2);
                CGContextTranslateCTM(context, 0, -imageSize.width);
            }
            else if (orientation == UIInterfaceOrientationLandscapeRight)
            {
                CGContextRotateCTM(context, -M_PI_2);
                CGContextTranslateCTM(context, -imageSize.height, 0);
            }
            else if (orientation == UIInterfaceOrientationPortraitUpsideDown)
            {
                CGContextRotateCTM(context, M_PI);
                CGContextTranslateCTM(context, -imageSize.width, -imageSize.height);
            }
            if ([window respondsToSelector:@selector(drawViewHierarchyInRect:afterScreenUpdates:)])
            {
                [window drawViewHierarchyInRect:window.bounds afterScreenUpdates:YES];
            }
            else
            {
                [window.layer renderInContext:context];
            }
            CGContextRestoreGState(context);
        }
        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return UIImagePNGRepresentation(image);
    }
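    The respondsToSelector: check above exists because drawViewHierarchyInRect:afterScreenUpdates: only arrived in iOS 7. On iOS 10 and later the same capture can be written more compactly with UIGraphicsImageRenderer, which manages the context and screen scale itself. A sketch, not part of the original article, that captures only the key window rather than looping over all windows:

    ```objc
    // iOS 10+ alternative: UIGraphicsImageRenderer picks up the screen
    // scale automatically, so no manual context bookkeeping is needed.
    - (UIImage *)imageWithScreenshotUsingRenderer
    {
        UIWindow *window = [UIApplication sharedApplication].keyWindow;
        UIGraphicsImageRenderer *renderer =
            [[UIGraphicsImageRenderer alloc] initWithBounds:window.bounds];
        return [renderer imageWithActions:^(UIGraphicsImageRendererContext *context) {
            [window drawViewHierarchyInRect:window.bounds afterScreenUpdates:YES];
        }];
    }
    ```

    Unlike the loop above, this ignores secondary windows (for example the keyboard window), which may or may not matter for a given app.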
    
    
    • Step 3: Stamp a watermark onto the captured image, or stitch a logo (or any other image) onto it. Code:
    // Appends a logo image below the screenshot and stamps a text watermark
    - (UIImage *)waterMarkForImage:(UIImage *)shotImg
                      withMarkName:(NSString *)markName
    {
        UIImage *codeImg = [UIImage imageNamed:@"codeImage"];
        CGFloat width = shotImg.size.width;
        CGFloat height = shotImg.size.height;
        CGFloat codeImgRatio = 220/375.0f; // height/width ratio of the appended image, used to scale it to the screen width
        // Scale 0 uses the screen scale, avoiding a blurry result on Retina displays
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, width * codeImgRatio + height), NO, 0);
        [shotImg drawInRect:CGRectMake(0.0, 0.0, width, height)];
        [codeImg drawInRect:CGRectMake(0.0, height, width, width * codeImgRatio)];
        // Stamp the watermark text
        NSDictionary *attribute = @{
                               NSFontAttributeName: [UIFont boldSystemFontOfSize:10],  // font
                               NSForegroundColorAttributeName: [UIColor orangeColor]   // text color
                               };
        [markName drawAtPoint:CGPointMake(0, height + width * codeImgRatio - 20) withAttributes:attribute];
        // Grab the final composited image
        UIImage *finalImg = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return finalImg;
    }
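    In the article, the Share button only logs a message because sharing goes through the WeChat SDK in the real project. As a generic, SDK-free illustration (not the article's actual share code), the composited image could instead be handed to the system share sheet:

    ```objc
    // Generic sharing via UIActivityViewController (available since iOS 6).
    // The system offers whatever share targets are installed on the device.
    - (void)shareImage:(UIImage *)image fromViewController:(UIViewController *)presenter
    {
        UIActivityViewController *activityVC =
            [[UIActivityViewController alloc] initWithActivityItems:@[image]
                                              applicationActivities:nil];
        [presenter presentViewController:activityVC animated:YES completion:nil];
    }
    ```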
    
    Wrap-up: that completes the in-app screenshot handling. Only the core code is shown above; how you apply it depends on your project's requirements. In my case I stitched an image onto the screenshot and then shared it to WeChat. These are personal study notes written down while learning; if anything in the code looks off, corrections are welcome 🙏. Once again, the demo link
