As far as I know, there are roughly three ways to do edge (border) detection on iOS:
OpenCV
WeScan
CIDetector
I'm not very comfortable with OpenCV; it is fairly complex. Note that OpenCV's default color order is BGR rather than RGB, and color-based detection is usually done after converting to the HSV color space. If you know C++ well, it's worth exploring on your own.
WeScan is a third-party framework I found on GitHub, and it lets you customize the view. Here is its address:
https://github.com/WeTransfer/WeScan
Next, let's focus on the third approach.
CIDetector is iOS's native detector. First, look at how it is initialized:
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeRectangle context:nil options:@{CIDetectorAccuracy:CIDetectorAccuracyHigh}];
It can be created with any of these three detector types:
/* Specifies a detector type for face recognition. */
CORE_IMAGE_EXPORT NSString* const CIDetectorTypeFace NS_AVAILABLE(10_7, 5_0);
/* Specifies a detector type for rectangle detection. */
CORE_IMAGE_EXPORT NSString* const CIDetectorTypeRectangle NS_AVAILABLE(10_10, 8_0);
/* Specifies a detector type for barcode detection. */
CORE_IMAGE_EXPORT NSString* const CIDetectorTypeQRCode NS_AVAILABLE(10_10, 8_0);
In other words, it can recognize faces, rectangles, and QR codes.
Next, let's see how to use it.
// featuresInImage: takes a CIImage, so convert the UIImage first
CIImage *ciImage = [[CIImage alloc] initWithImage:image];
// Pass in the image and get back an array of detected border features
NSArray *rectangles = [detector featuresInImage:ciImage];
// The array holds CIFeature objects; since the detector type is CIDetectorTypeRectangle,
// they can be cast directly to CIRectangleFeature. For example, to grab the first one:
CIRectangleFeature *firstFeature = rectangles[0];
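In practice, featuresInImage: may return more than one rectangle. A common heuristic (my own suggestion, not from the original post) is to keep the feature with the largest bounds area instead of blindly taking index 0. A minimal sketch:

```objc
// Sketch: pick the rectangle feature covering the largest area.
CIRectangleFeature *biggest = nil;
CGFloat biggestArea = 0;
for (CIRectangleFeature *feature in rectangles) {
    CGFloat area = CGRectGetWidth(feature.bounds) * CGRectGetHeight(feature.bounds);
    if (area > biggestArea) {
        biggestArea = area;
        biggest = feature;
    }
}
```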
The CIRectangleFeature class is declared like this:
@interface CIRectangleFeature : CIFeature
{
CGRect bounds;
CGPoint topLeft;
CGPoint topRight;
CGPoint bottomLeft;
CGPoint bottomRight;
}
@property (readonly) CGRect bounds;
@property (readonly) CGPoint topLeft;
@property (readonly) CGPoint topRight;
@property (readonly) CGPoint bottomLeft;
@property (readonly) CGPoint bottomRight;
@end
As you can see, it exposes bounds, topLeft, topRight, bottomLeft, and bottomRight.
Note: the Quartz 2D coordinate system is not the same as the UIKit coordinate system. CIFeature uses Quartz 2D coordinates (origin at the bottom-left), while the UIKit coordinates we normally lay out with have their origin at the top-left.
Coordinate conversion
// Convert Quartz 2D coordinates to UIKit coordinates
- (void)transformRealRect
{
    // Size of the source image
    CGRect imageRect = CGRectMake(0, 0, self.image.size.width, self.image.size.height);
    // Size of the UIImageView (kScreenWidth/kScreenHeight are screen-size macros defined elsewhere)
    CGRect rect = CGRectMake(0, 0, kScreenWidth, kScreenHeight);
    // Scale factors between image and view
    CGFloat deltaX = CGRectGetWidth(rect) / CGRectGetWidth(imageRect);
    CGFloat deltaY = CGRectGetHeight(rect) / CGRectGetHeight(imageRect);
    // Flip the y-axis, then scale to the view's size
    CGAffineTransform transform = CGAffineTransformMakeTranslation(0.f, CGRectGetHeight(rect));
    transform = CGAffineTransformScale(transform, 1, -1);
    transform = CGAffineTransformScale(transform, deltaX, deltaY);
    // Apply the transform to each corner point
    _topLeft = CGPointApplyAffineTransform(_topLeft, transform);
    _topRight = CGPointApplyAffineTransform(_topRight, transform);
    _bottomRight = CGPointApplyAffineTransform(_bottomRight, transform);
    _bottomLeft = CGPointApplyAffineTransform(_bottomLeft, transform);
}
Finally, draw the transformed points onto the UIImageView's layer.
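The post doesn't show the drawing step itself. A minimal sketch (the method name, `self.imageView`, and the corner-point ivars are assumptions) using a UIBezierPath and a CAShapeLayer added to the image view's layer might look like this:

```objc
// Sketch: outline the detected rectangle on the image view.
// _topLeft etc. are assumed to hold the already-converted UIKit points.
- (void)drawBorder
{
    UIBezierPath *path = [UIBezierPath bezierPath];
    [path moveToPoint:_topLeft];
    [path addLineToPoint:_topRight];
    [path addLineToPoint:_bottomRight];
    [path addLineToPoint:_bottomLeft];
    [path closePath];

    CAShapeLayer *shapeLayer = [CAShapeLayer layer];
    shapeLayer.path = path.CGPath;
    shapeLayer.strokeColor = [UIColor redColor].CGColor;
    shapeLayer.fillColor = [UIColor clearColor].CGColor;
    shapeLayer.lineWidth = 2.f;
    [self.imageView.layer addSublayer:shapeLayer];
}
```

Using a CAShapeLayer (rather than drawing in drawRect:) keeps the overlay separate from the image and makes it cheap to remove or redraw when a new rectangle is detected.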
This approach was learned from this demo: https://github.com/madaoCN/MADRectDetect
One more note: CIDetector can also be used to scan QR codes.
// Scan a QR code
- (void)scanQRCodeFunction:(UIImage *)image
{
    // Create a detector with the QR-code type
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeQRCode context:nil options:@{CIDetectorAccuracy:CIDetectorAccuracyHigh}];
    // Convert the UIImage to a CIImage
    CIImage *ciimg = [[CIImage alloc] initWithImage:image];
    // Get the feature array
    NSArray *features = [detector featuresInImage:ciimg];
    if (features.count > 0) {
        // The features come back as CIFeature, but since we are scanning QR codes
        // they can be cast directly to CIQRCodeFeature
        CIQRCodeFeature *feature = features[0];
        // Read the decoded text
        NSString *msg = feature.messageString;
    } else {
        // SVProgressHUD is a third-party HUD library
        [SVProgressHUD showInfoWithStatus:@"Please check that the borders are aligned and that you are using the dedicated mistake notebook"];
    }
}
I haven't used face detection, but the usage is presumably much the same: create the detector with CIDetectorTypeFace, convert the UIImage to a CIImage, get the feature array, and cast the features to CIFaceFeature when reading them.
This is the structure of CIFaceFeature:
@interface CIFaceFeature : CIFeature
{
CGRect bounds;
BOOL hasLeftEyePosition;
CGPoint leftEyePosition;
BOOL hasRightEyePosition;
CGPoint rightEyePosition;
BOOL hasMouthPosition;
CGPoint mouthPosition;
BOOL hasTrackingID;
int trackingID;
BOOL hasTrackingFrameCount;
int trackingFrameCount;
BOOL hasFaceAngle;
float faceAngle;
BOOL hasSmile;
BOOL leftEyeClosed;
BOOL rightEyeClosed;
}
@end
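Extrapolating from the QR-code example above (I haven't verified this on a device, so treat it as a sketch), a face-detection call might look like:

```objc
// Sketch: detect faces in a UIImage and log eye/mouth positions.
- (void)scanFacesInImage:(UIImage *)image
{
    // Create a detector with the face type
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];
    // Convert the UIImage to a CIImage
    CIImage *ciImage = [[CIImage alloc] initWithImage:image];
    NSArray *features = [detector featuresInImage:ciImage];
    for (CIFaceFeature *face in features) {
        // bounds is in Quartz 2D coordinates, same caveat as the rectangle case
        NSLog(@"face bounds: %@", NSStringFromCGRect(face.bounds));
        if (face.hasLeftEyePosition) {
            NSLog(@"left eye at %@", NSStringFromCGPoint(face.leftEyePosition));
        }
        if (face.hasMouthPosition) {
            NSLog(@"mouth at %@", NSStringFromCGPoint(face.mouthPosition));
        }
    }
}
```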
If you're interested, dig into it yourself. That's all.