Three Ways to Implement a Drawing Board in iOS

Author: iOS_修心 | Published 2022-06-01 18:37

UIBezierPath, Quartz 2D, OpenGL ES

1. UIBezierPath


  1. UIBezierPath lets you create vector-based paths; the class is UIKit's wrapper around Core Graphics paths. With it you can define simple shapes such as ellipses and rectangles, as well as shapes made up of multiple straight and curved segments.
  2. UIBezierPath wraps the CGPathRef data type. Vector-based paths are built from lines and curves: straight segments for rectangles and polygons, curves for arcs, circles, and other more complex shapes.

Steps for drawing with UIBezierPath:

  1. Create a UIBezierPath object
  2. Call -moveToPoint: to set the starting point of the first segment
  3. Add straight or curved segments, e.g. -addQuadCurveToPoint:controlPoint:
  4. In -drawRect:, stroke each UIBezierPath
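
As a minimal standalone sketch of these four steps (the view subclass and the sample points are illustrative, not from the original code):

// Inside a UIView subclass: stroke a simple curve.
- (void)drawRect:(CGRect)rect {
    UIBezierPath *path = [UIBezierPath bezierPath];           // 1. create the path
    [path moveToPoint:CGPointMake(20, 100)];                  // 2. set the start point
    [path addQuadCurveToPoint:CGPointMake(200, 100)
                 controlPoint:CGPointMake(110, 20)];          // 3. add a curve segment
    path.lineWidth = 4;
    [[UIColor blackColor] setStroke];
    [path stroke];                                            // 4. stroke the path in drawRect:
}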
1. Initialize the UIBezierPath and configure its properties
 self.beziPath = [[UIBezierPath alloc] init];

2. Implement touchesBegan, touchesMoved and touchesEnded in the canvas view
3. Each touch event updates and redraws the current UIBezierPath
 - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event{
    UITouch *touch = [touches anyObject];
    CGPoint currentPoint = [touch locationInView:self];
    // Create a new path (DCBeizierPath is a UIBezierPath subclass that carries per-stroke state)
    self.beziPath = [[DCBeizierPath alloc] init];
    self.beziPath.lineColor = self.lineColor;
    self.beziPath.isErase = self.isErase;
    self.beziPath.lineJoinStyle = kCGLineJoinRound;
    self.beziPath.lineCapStyle = kCGLineCapRound;
    // Set the starting point
    [self.beziPath moveToPoint:currentPoint];
    // Store the path in the array
    [self.beziPathArrM addObject:self.beziPath];
}

- (void)touchesMoved:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event{
    UITouch *touch = [touches anyObject];
    // Get the current (moved-to) point
    CGPoint currentPoint = [touch locationInView:self];
    CGPoint previousPoint = [touch previousLocationInView:self];
    CGPoint midP = midPoint(previousPoint,currentPoint);
    // 1. add the point to the path, 2. trigger a redraw
    [self.beziPath addQuadCurveToPoint:currentPoint controlPoint:midP];
    [self setNeedsDisplay];
}
- (void)touchesEnded:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event{
    UITouch *touch = [touches anyObject];
    // Get the final point
    CGPoint currentPoint = [touch locationInView:self];
    CGPoint previousPoint = [touch previousLocationInView:self];
    CGPoint midP = midPoint(previousPoint,currentPoint);
    // 1. add the point to the path, 2. trigger a redraw
    [self.beziPath addQuadCurveToPoint:currentPoint controlPoint:midP];
    [self setNeedsDisplay];
}
// Midpoint of two points
CGPoint midPoint(CGPoint p1, CGPoint p2){
    return CGPointMake((p1.x + p2.x) * 0.5, (p1.y + p2.y) * 0.5);
}
4. Perform the drawing in drawRect:
 - (void)drawRect:(CGRect)rect
{
    // Stroke every stored path
    if(self.beziPathArrM.count){
        for (DCBeizierPath *path  in self.beziPathArrM) {
            
            if (path.isErase) {
                // Eraser
                [[UIColor clearColor] setStroke];
                path.lineWidth = kEraseLineWidth;
                [path strokeWithBlendMode:kCGBlendModeCopy alpha:1.0];
            } else {
                // Normal stroke
                [path.lineColor setStroke];
                path.lineWidth = kLineWidth;
                [path strokeWithBlendMode:kCGBlendModeNormal alpha:1.0];
            }
        }
    }
    
    [super drawRect:rect];
}
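
The code above relies on DCBeizierPath, kLineWidth, kEraseLineWidth, and the beziPathArrM array without defining them. A minimal sketch of what they are assumed to look like (names match the code above; the definitions and constant values are illustrative):

// Assumed constants (values are illustrative).
static const CGFloat kLineWidth      = 5.0;
static const CGFloat kEraseLineWidth = 20.0;

// Assumed UIBezierPath subclass that remembers per-stroke state.
@interface DCBeizierPath : UIBezierPath
@property (nonatomic, strong) UIColor *lineColor;   // stroke color for this path
@property (nonatomic, assign) BOOL isErase;         // YES if the path was drawn in eraser mode
@end

@implementation DCBeizierPath
@end

// In the canvas view: lazily created array holding every path so drawRect: can replay them.
- (NSMutableArray *)beziPathArrM {
    if (!_beziPathArrM) {
        _beziPathArrM = [NSMutableArray array];
    }
    return _beziPathArrM;
}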

Summary

  • Pros
    1. This is the simplest approach: it only uses plain Objective-C APIs, so the implementation is straightforward.
    2. Replaying stored points by rebuilding and stroking the paths is fast.

  • Cons
    1. If every stroke you draw must stay on screen, you have to keep every drawn path around.
    2. Every time a new segment is added, all previously drawn paths are stroked again in drawRect:, which wastes performance.
    3. Even if you accept that waste, there is another problem: as more and more strokes accumulate, the spacing between recognized touch points grows, drawing gets noticeably slower, and you gradually start to see earlier strokes being redrawn.

  • When to use it
    1. Fine for drawing a few simple line segments once, with no later modification.
    2. Fine for simple decorative lines needed for UI effects.
    3. Not recommended when strokes are frequently modified and redrawn.

2. Quartz 2D

When drawing lines this way, the drawing methods create a path internally by default and store all the path information in it.

  1. Create a path with CGPathCreateMutable(). Calling it creates a CGMutablePathRef that holds the drawing information.
  2. Add the drawing information to the path. The older style added point positions directly to the ctx (the graphics context); internally the context creates a path for exactly this purpose. The context reserves a block of storage for drawing information, and that storage is in fact a CGMutablePathRef.
  3. Add the path to the context.
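
A minimal self-contained sketch of these three steps (drawn inside drawRect:; the sample line is illustrative, not from the original code):

- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // 1. Create a mutable path that will hold the drawing information.
    CGMutablePathRef path = CGPathCreateMutable();
    // 2. Add the drawing information (points and segments) to the path.
    CGPathMoveToPoint(path, NULL, 20, 100);
    CGPathAddLineToPoint(path, NULL, 200, 100);
    // 3. Add the path to the context, then stroke it.
    CGContextAddPath(ctx, path);
    CGContextSetLineWidth(ctx, 4);
    CGContextSetStrokeColorWithColor(ctx, [UIColor blackColor].CGColor);
    CGContextStrokePath(ctx);
    CGPathRelease(path);
}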

1. Initial pen setup
-(void)setup{
    self.multipleTouchEnabled = YES;
    self.lineWidth = 5;
    self.lineColor =[UIColor blackColor];
}
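
The touch handlers and drawRect: below use several properties, constants, and a midpoint helper that the post never declares; a rough sketch of what is assumed (DCQuartzCanvasView and the constant values are illustrative):

// Assumed constants (values are illustrative).
static const CGFloat kLineWidth      = 5.0;
static const CGFloat kEraseLineWidth = 20.0;

// Same midpoint helper as in section 1, under a different name.
static CGPoint midPoint1(CGPoint p1, CGPoint p2) {
    return CGPointMake((p1.x + p2.x) * 0.5, (p1.y + p2.y) * 0.5);
}

// Assumed properties on the canvas view.
@interface DCQuartzCanvasView : UIView
@property (nonatomic, assign) CGPoint currentPoint;     // latest touch location
@property (nonatomic, assign) CGPoint previousPoint1;   // previous touch location
@property (nonatomic, assign) CGPoint previousPoint2;   // the location before that
@property (nonatomic, assign) CGContextRef context;     // current drawing context
@property (nonatomic, strong) UIColor *lineColor;
@property (nonatomic, assign) CGFloat lineWidth;
@property (nonatomic, assign) BOOL isErase;
@end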

2. Touch handling methods
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event{
    UITouch *touch = [touches anyObject];
    
    self.previousPoint1 = [touch locationInView:self];
    self.previousPoint2 = [touch locationInView:self];
    self.currentPoint = [touch locationInView:self];
    
    CGPoint mid1    = midPoint1(self.previousPoint1, self.previousPoint2);
    CGPoint mid2    = midPoint1(self.currentPoint, self.previousPoint1);
   
    CGMutablePathRef path = CGPathCreateMutable();
    CGPathMoveToPoint(path, NULL, mid1.x, mid1.y);
    CGPathAddQuadCurveToPoint(path, NULL, self.previousPoint1.x, self.previousPoint1.y, mid2.x, mid2.y);
    
    CGRect bounds = CGPathGetBoundingBox(path);
    CGPathRelease(path);
    [self setNeedsDisplayInRect:bounds];

}
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event{
    UITouch *touch  = [touches anyObject];
    
    self.previousPoint2  = self.previousPoint1;
    self.previousPoint1  = [touch previousLocationInView:self];
    self.currentPoint    = [touch locationInView:self];
    
    CGPoint mid1    = midPoint1(self.previousPoint1, self.previousPoint2);
    CGPoint mid2    = midPoint1(self.currentPoint, self.previousPoint1);
    
    CGMutablePathRef path = CGPathCreateMutable();
    CGPathMoveToPoint(path, NULL, mid1.x, mid1.y);
    CGPathAddQuadCurveToPoint(path, NULL, self.previousPoint1.x, self.previousPoint1.y, mid2.x, mid2.y);
    
    CGRect bounds = CGPathGetBoundingBox(path);
    CGPathRelease(path);
    [self setNeedsDisplayInRect:bounds];
  
}

-(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event{
    UITouch *touch  = [touches anyObject];
 
    self.previousPoint2  = self.previousPoint1;
    self.previousPoint1  = [touch previousLocationInView:self];
    self.currentPoint    = [touch locationInView:self];
    
    CGPoint mid1    = midPoint1(self.previousPoint1, self.previousPoint2);
    CGPoint mid2    = midPoint1(self.currentPoint, self.previousPoint1);
    
    CGMutablePathRef path = CGPathCreateMutable();
    CGPathMoveToPoint(path, NULL, mid1.x, mid1.y);
    CGPathAddQuadCurveToPoint(path, NULL, self.previousPoint1.x, self.previousPoint1.y, mid2.x, mid2.y);
    
    // Redraw the affected area
    CGRect bounds = CGPathGetBoundingBox(path);
    CGPathRelease(path);
    [self setNeedsDisplayInRect:bounds];
}
3. Drawing
- (void)drawRect:(CGRect)rect{
    CGPoint mid1 = midPoint1(self.previousPoint1, self.previousPoint2);
    CGPoint mid2 = midPoint1(self.currentPoint, self.previousPoint1);
    
    self.context = UIGraphicsGetCurrentContext();
    CGContextMoveToPoint(self.context, mid1.x, mid1.y);
    // Add the curve segment
    CGContextAddQuadCurveToPoint(self.context, self.previousPoint1.x, self.previousPoint1.y, mid2.x, mid2.y);
    // Round line caps
    CGContextSetLineCap(self.context, kCGLineCapRound);
    // Line width: wider when erasing
    CGContextSetLineWidth(self.context, self.isErase ? kEraseLineWidth : kLineWidth);
    // Stroke color: clear when erasing
    CGContextSetStrokeColorWithColor(self.context, self.isErase ? [UIColor clearColor].CGColor : self.lineColor.CGColor);
    
    CGContextSetLineJoin(self.context, kCGLineJoinRound);
    // The blend mode depends on whether we are erasing
    CGContextSetBlendMode(self.context, self.isErase ? kCGBlendModeDestinationIn:kCGBlendModeNormal);
    CGContextStrokePath(self.context);
    [super drawRect:rect];
}

Summary

  • Pros
    1. The drawing calls are lower level, so this approach is more efficient and faster.
    2. Each stroke renders smoothly with no perceptible lag, and previously drawn paths are not re-stroked.
    3. Line ends look rounder and smoother.

  • Cons
    1. If you have a known set of points, replaying them means redrawing every path, which takes a long time to finish.
    2. If the app is already under heavy load (in our case it was running a video session, a communication session, and a lot of interactive controls), strokes on the iPad 3 show gaps. The iPad 2, iPad mini 2/3, and iPad Air did not have this problem (the iPhone 4s and 5 were not tested). The likely reason: the iPad 3 has a Retina screen with double the resolution, but its CPU/GPU is only about 50% faster than the iPad 2's, so it cannot keep recognizing touch points continuously and gaps appear in the stroke.

  • When to use it
    1. When the app is not performance-constrained and strokes never need to be replayed.
    2. When drawing happens on a single page and never needs to be redrawn.
    3. When you do not care about the replay cost.

3. OpenGL ES

1. Link the OpenGLES.framework system library and import the headers

#import <OpenGLES/EAGL.h>
#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>
#import <GLKit/GLKit.h>

2. Import the helper files

#import "shaderUtil.h"
#import "fileUtil.h"
#import "debug.h"

3. The semi-transparent brush image is essential: it acts as the brush tip, and its alpha controls how light or dark the rendered stroke color appears.
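
The textureInfo_t struct, the program[] table, the attribute/uniform indices, and the brush constants used below are never defined in the post; they match Apple's GLPaint sample, which this code appears to be based on. A rough sketch of the assumed declarations (DCGLPaintView and the constant values are illustrative):

// Texture bookkeeping.
typedef struct {
    GLuint  id;
    GLsizei width, height;
} textureInfo_t;

// Shader program, attribute, and uniform indices.
enum { PROGRAM_POINT, NUM_PROGRAMS };
enum { UNIFORM_MVP, UNIFORM_POINT_SIZE, UNIFORM_VERTEX_COLOR, UNIFORM_TEXTURE, NUM_UNIFORMS };
enum { ATTRIB_VERTEX, NUM_ATTRIBS };

typedef struct {
    char *vert, *frag;              // shader source file names
    GLint uniform[NUM_UNIFORMS];
    GLuint id;
} programInfo_t;

// Shader program table (file names as in the GLPaint sample).
static programInfo_t program[NUM_PROGRAMS] = {
    { "point.vsh", "point.fsh" },   // PROGRAM_POINT
};

// Brush tuning constants (values follow the GLPaint sample).
#define kBrushScale     2.0f    // divisor for the brush point size
#define kBrushPixelStep 3       // distance in pixels between generated points

// Assumed instance variables in the paint view's class extension.
@interface DCGLPaintView () {
    EAGLContext   *context;
    GLuint        viewRenderbuffer, viewFramebuffer, depthRenderbuffer;
    GLint         backingWidth, backingHeight;
    textureInfo_t brushTexture;
    GLfloat       brushColor[4];
    GLuint        vboId;
    BOOL          initialized;
}
@end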



// Create a texture from an image
- (textureInfo_t)textureFromName:(NSString *)name
{
    CGImageRef      brushImage;
    CGContextRef    brushContext;
    GLubyte         *brushData;
    size_t          width, height;
    GLuint          texId;
    textureInfo_t   texture;
    
    // First create a UIImage object from the data in an image file, and then extract the Core Graphics image
    brushImage = [UIImage imageNamed:name].CGImage;
    
    // Get the width and height of the image
    width = CGImageGetWidth(brushImage);
    height = CGImageGetHeight(brushImage);
    
    // Make sure the image exists
    if(brushImage) {
        // Allocate  memory needed for the bitmap context
        brushData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
        // Use the bitmap creation function provided by the Core Graphics framework.
        brushContext = CGBitmapContextCreate(brushData, width, height, 8, width * 4, CGImageGetColorSpace(brushImage), kCGImageAlphaPremultipliedLast);
        // After you create the context, you can draw the  image to the context.
        CGContextDrawImage(brushContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), brushImage);
        // You don't need the context at this point, so you need to release it to avoid memory leaks.
        CGContextRelease(brushContext);
        // Use OpenGL ES to generate a name (ID) for the texture.
        glGenTextures(1, &texId);
        // Bind the texture name.
        glBindTexture(GL_TEXTURE_2D, texId);
        // Set the texture parameters to use a minifying filter and a linear filter (weighted average)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        // Specify a 2D texture image, providing a pointer to the image data in memory
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)width, (int)height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData);
        // Release  the image data; it's no longer needed
        free(brushData);
        
        texture.id = texId;
        texture.width = (int)width;
        texture.height = (int)height;
    }
    
    return texture;
}

// Initialize the OpenGL ES state
- (BOOL)initGL
{
    // Generate IDs for a framebuffer object and a color renderbuffer
    glGenFramebuffers(1, &viewFramebuffer);
    glGenRenderbuffers(1, &viewRenderbuffer);
    
    // Bind the framebuffer
    glBindFramebuffer(GL_FRAMEBUFFER, viewFramebuffer);
    
    // Bind the color renderbuffer
    glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
    // This call associates the storage for the current render buffer with the EAGLDrawable (our CAEAGLLayer),
    // allowing us to draw into a buffer that will later be rendered to screen wherever the layer is (which corresponds with our view).
    [context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(id<EAGLDrawable>)self.layer];
    // Attach the color renderbuffer to the framebuffer
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, viewRenderbuffer);
    
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &backingWidth);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &backingHeight);
    
    // For this sample, we do not need a depth buffer. If you do, this is how you can create one and attach it to the framebuffer:
    //    glGenRenderbuffers(1, &depthRenderbuffer);
    //    glBindRenderbuffer(GL_RENDERBUFFER, depthRenderbuffer);
    //    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, backingWidth, backingHeight);
    //    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderbuffer);
    
    if(glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    {
        NSLog(@"failed to make complete framebuffer object %x", glCheckFramebufferStatus(GL_FRAMEBUFFER));
        return NO;
    }
    
    // Set the viewport to the drawable size
    glViewport(0, 0, backingWidth, backingHeight);
    
    // Create a Vertex Buffer Object to hold our data
    glGenBuffers(1, &vboId);
    
    // Load the brush texture (the brush tip image)
    brushTexture = [self textureFromName:@"Particle"];
    
    // Load shaders
    [self setupShaders];
    
    // Enable blending and set a blending function appropriate for premultiplied alpha pixel data
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    
    return YES;
}
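
initGL assumes the view is backed by a CAEAGLLayer and that the EAGLContext already exists. A rough sketch of that setup, modeled on Apple's GLPaint sample (not shown in the original post):

// The view must be backed by a CAEAGLLayer for renderbufferStorage:fromDrawable: to work.
+ (Class)layerClass {
    return [CAEAGLLayer class];
}

- (instancetype)initWithCoder:(NSCoder *)coder {
    if ((self = [super initWithCoder:coder])) {
        CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
        eaglLayer.opaque = YES;  // as in the GLPaint sample
        eaglLayer.drawableProperties = @{ kEAGLDrawablePropertyRetainedBacking : @YES,
                                          kEAGLDrawablePropertyColorFormat     : kEAGLColorFormatRGBA8 };

        context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
        if (!context || ![EAGLContext setCurrentContext:context]) {
            return nil;
        }
        self.contentScaleFactor = [UIScreen mainScreen].scale;
    }
    return self;
}

// Buffers are created once the view has its final size.
- (void)layoutSubviews {
    [EAGLContext setCurrentContext:context];
    if (!initialized) {
        initialized = [self initGL];
    } else {
        [self resizeFromLayer:(CAEAGLLayer *)self.layer];
    }
}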


- (BOOL)resizeFromLayer:(CAEAGLLayer *)layer
{
   // Allocate color buffer backing based on the current layer size
   glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
   [context renderbufferStorage:GL_RENDERBUFFER fromDrawable:layer];
   glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &backingWidth);
   glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &backingHeight);
   
   // For this sample, we do not need a depth buffer. If you do, this is how you can allocate depth buffer backing:
   //    glBindRenderbuffer(GL_RENDERBUFFER, depthRenderbuffer);
   //    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, backingWidth, backingHeight);
   //    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderbuffer);
   
   if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
   {
       NSLog(@"Failed to make complete framebuffer object %x", glCheckFramebufferStatus(GL_FRAMEBUFFER));
       return NO;
   }
   
   // Update projection matrix
   GLKMatrix4 projectionMatrix = GLKMatrix4MakeOrtho(0, backingWidth, 0, backingHeight, -1, 1);
   GLKMatrix4 modelViewMatrix = GLKMatrix4Identity; // this sample uses a constant identity modelView matrix
   GLKMatrix4 MVPMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix);
   
   glUseProgram(program[PROGRAM_POINT].id);
   glUniformMatrix4fv(program[PROGRAM_POINT].uniform[UNIFORM_MVP], 1, GL_FALSE, MVPMatrix.m);
   
   // Update viewport
   glViewport(0, 0, backingWidth, backingHeight);
   
   return YES;
}

- (void)setupShaders
{
   for (int i = 0; i < NUM_PROGRAMS; i++)
   {
       char *vsrc = readFile(pathForResource(program[i].vert));
       char *fsrc = readFile(pathForResource(program[i].frag));
       GLsizei attribCt = 0;
       GLchar *attribUsed[NUM_ATTRIBS];
       GLint attrib[NUM_ATTRIBS];
       GLchar *attribName[NUM_ATTRIBS] = {
           "inVertex",
       };
       const GLchar *uniformName[NUM_UNIFORMS] = {
           "MVP", "pointSize", "vertexColor", "texture",
       };
       
       // auto-assign known attribs
       for (int j = 0; j < NUM_ATTRIBS; j++)
       {
           if (strstr(vsrc, attribName[j]))
           {
               attrib[attribCt] = j;
               attribUsed[attribCt++] = attribName[j];
           }
       }
       
       glueCreateProgram(vsrc, fsrc,
                         attribCt, (const GLchar **)&attribUsed[0], attrib,
                         NUM_UNIFORMS, &uniformName[0], program[i].uniform,
                         &program[i].id);
       free(vsrc);
       free(fsrc);
       
       // Set constant/initialize uniforms
       if (i == PROGRAM_POINT)
       {
           glUseProgram(program[PROGRAM_POINT].id);
           
           // the brush texture will be bound to texture unit 0
           glUniform1i(program[PROGRAM_POINT].uniform[UNIFORM_TEXTURE], 0);
           
           // viewing matrices
           GLKMatrix4 projectionMatrix = GLKMatrix4MakeOrtho(0, backingWidth, 0, backingHeight, -1, 1);
           GLKMatrix4 modelViewMatrix = GLKMatrix4Identity; // this sample uses a constant identity modelView matrix
           GLKMatrix4 MVPMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix);
           
           glUniformMatrix4fv(program[PROGRAM_POINT].uniform[UNIFORM_MVP], 1, GL_FALSE, MVPMatrix.m);
           
           // point size
           glUniform1f(program[PROGRAM_POINT].uniform[UNIFORM_POINT_SIZE], brushTexture.width / kBrushScale);
           
           // initialize brush color
           glUniform4fv(program[PROGRAM_POINT].uniform[UNIFORM_VERTEX_COLOR], 1, brushColor);
       }
   }
   
   glError();
}


// Render a line between two points
- (void)renderLineFromPoint:(CGPoint)start toPoint:(CGPoint)end
{
    //     NSLog(@"drawLineWithPoints--%@--%@",NSStringFromCGPoint(start),NSStringFromCGPoint(end));
    static GLfloat*     vertexBuffer = NULL;
    static NSUInteger   vertexMax = 64;
    NSUInteger          vertexCount = 0,
    count,
    i;
    
    [EAGLContext setCurrentContext:context];
    glBindFramebuffer(GL_FRAMEBUFFER, viewFramebuffer);
    
    // Convert locations from Points to Pixels
    CGFloat scale = self.contentScaleFactor;
    
    start.x *= scale;
    start.y *= scale;
    end.x *= scale;
    end.y *= scale;
    
    // Allocate vertex array buffer
    if(vertexBuffer == NULL)
        vertexBuffer = malloc(vertexMax * 2 * sizeof(GLfloat));
    
    // Add points to the buffer so there are drawing points every X pixels
    count = MAX(ceilf(sqrtf((end.x - start.x) * (end.x - start.x) + (end.y - start.y) * (end.y - start.y)) / kBrushPixelStep), 1);
    for(i = 0; i < count; ++i) {
        if(vertexCount == vertexMax) {
            vertexMax = 2 * vertexMax;
            vertexBuffer = realloc(vertexBuffer, vertexMax * 2 * sizeof(GLfloat));
        }
        
        vertexBuffer[2 * vertexCount + 0] = start.x + (end.x - start.x) * ((GLfloat)i / (GLfloat)count);
        vertexBuffer[2 * vertexCount + 1] = start.y + (end.y - start.y) * ((GLfloat)i / (GLfloat)count);
        vertexCount += 1;
    }
    
    // Load data to the Vertex Buffer Object
    glBindBuffer(GL_ARRAY_BUFFER, vboId);
    glBufferData(GL_ARRAY_BUFFER, vertexCount*2*sizeof(GLfloat), vertexBuffer, GL_STATIC_DRAW);
    
    
    glEnableVertexAttribArray(ATTRIB_VERTEX);
    glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, GL_FALSE, 0, 0);
    
    // Draw
    glUseProgram(program[PROGRAM_POINT].id);
    
    // Draw the points (GL_POINTS; the brush texture and point size give the stroke its look)
    glDrawArrays(GL_POINTS, 0, (int)vertexCount);
    
    // Display the buffer
    glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER];
}
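
renderLineFromPoint:toPoint: is driven from the touch handlers, roughly as in the GLPaint sample (previousLocation is an assumed CGPoint instance variable; note the y flip, because OpenGL's origin is at the bottom-left while UIKit's is at the top-left):

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    previousLocation = [touch locationInView:self];
    previousLocation.y = self.bounds.size.height - previousLocation.y;
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint location = [touch locationInView:self];
    location.y = self.bounds.size.height - location.y;

    // Render the segment from the last point to the current one, then advance.
    [self renderLineFromPoint:previousLocation toPoint:location];
    previousLocation = location;
}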


// Clear the canvas
- (void)clearDrawImageView
{
    [EAGLContext setCurrentContext:context];
    
    // Clear the buffer
    glBindFramebuffer(GL_FRAMEBUFFER, viewFramebuffer);
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glClear(GL_COLOR_BUFFER_BIT);
    
    // Display the buffer
    glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER];
}

- (void)setBrushColorWithRed:(CGFloat)red green:(CGFloat)green blue:(CGFloat)blue alpha:(CGFloat)alpha
{
    // Update the brush color
    brushColor[0] = red ;
    brushColor[1] = green ;
    brushColor[2] = blue ;
    brushColor[3] = alpha;
    
    if (initialized) {
        glUseProgram(program[PROGRAM_POINT].id);
        // Upload the new brush color to the shader
        glUniform4fv(program[PROGRAM_POINT].uniform[UNIFORM_VERTEX_COLOR], 1, brushColor);
    }
}



// Releases resources when they are no longer needed.
- (void)dealloc
{
    // Destroy framebuffers and renderbuffers
    if (viewFramebuffer) {
        glDeleteFramebuffers(1, &viewFramebuffer);
        viewFramebuffer = 0;
    }
    if (viewRenderbuffer) {
        glDeleteRenderbuffers(1, &viewRenderbuffer);
        viewRenderbuffer = 0;
    }
    if (depthRenderbuffer)
    {
        glDeleteRenderbuffers(1, &depthRenderbuffer);
        depthRenderbuffer = 0;
    }
    // texture
    if (brushTexture.id) {
        glDeleteTextures(1, &brushTexture.id);
        brushTexture.id = 0;
    }
    // vbo
    if (vboId) {
        glDeleteBuffers(1, &vboId);
        vboId = 0;
    }
    
    // tear down context
    if ([EAGLContext currentContext] == context)
        [EAGLContext setCurrentContext:nil];
}

Summary

  • Pros
    1. Very low level: drawing is faster because rendering goes straight to the GPU, and it fixed the broken-stroke bug the previous approach showed on iPad 3 hardware.
    2. Better performance overall.

  • Cons
    1. I have not yet found a way to draw arcs (curved strokes) with it.
    2. It is even lower level; the API is hard to read and largely uncommented, and even the commented parts are not easy to follow.
    3. Replaying a known set of points is also slow; the upside compared to the previous approach is that you can at least watch the strokes being replayed, which may suit some special requirements.

  • Pitfalls I hit when integrating it
    1. Switching between eraser and pen sometimes invalidates the drawing state, for reasons I could not determine. Workaround: re-apply the settings in every touchesBegan.
    2. In eraser mode, a small dot keeps following the stroke while erasing, again for unknown reasons. Same workaround as above.

  • When to use it
    1. Fine for drawing a few simple line segments once, with no later modification.
    2. Fine for simple decorative lines needed for UI effects.
    3. Not recommended when strokes are frequently modified and redrawn.
