This is a hands-on OpenGL ES project, following on from Getting Started Tutorial 3.
After studying OpenGL ES for a while, this app is a good way to put it into practice.
The OpenGL ES tutorial series is here.
The code for the OpenGL ES tutorial series is here - your stars and forks keep me motivated, and your feedback helps me go further.
Demo
实战.gif (demo animation). The demo comes from Apple's official sample code, so it is a good way to see how Apple's engineers apply OpenGL ES.
This post covers shaders, CoreGraphics, gesture handling, stroke interpolation, and the blurred-point brush effect.
Shader
Custom enums make it convenient to pass values between Objective-C and the shader; combined with the automatic location assignment below (glueCreateProgram fills the uniform array by enum index), this is very handy.
enum {
    PROGRAM_POINT,
    NUM_PROGRAMS
};

enum {
    UNIFORM_MVP,
    UNIFORM_POINT_SIZE,
    UNIFORM_VERTEX_COLOR,
    UNIFORM_TEXTURE,
    NUM_UNIFORMS
};

enum {
    ATTRIB_VERTEX,
    NUM_ATTRIBS
};

typedef struct {
    char *vert, *frag;
    GLint uniform[NUM_UNIFORMS];
    GLuint id;
} programInfo_t;

programInfo_t program[NUM_PROGRAMS] = {
    { "point.vsh", "point.fsh" },   // PROGRAM_POINT
};
When creating the program, glBindAttribLocation() binds the attribute locations and glueGetUniformLocation() looks up the uniform locations.
/* Convenience wrapper that compiles, links, enumerates uniforms and attribs */
GLint glueCreateProgram(const GLchar *vertSource, const GLchar *fragSource,
                        GLsizei attribNameCt, const GLchar **attribNames,
                        const GLint *attribLocations,
                        GLsizei uniformNameCt, const GLchar **uniformNames,
                        GLint *uniformLocations,
                        GLuint *program)
{
    GLuint vertShader = 0, fragShader = 0, prog = 0, status = 1, i;

    prog = glCreateProgram();

    status *= glueCompileShader(GL_VERTEX_SHADER, 1, &vertSource, &vertShader);
    status *= glueCompileShader(GL_FRAGMENT_SHADER, 1, &fragSource, &fragShader);
    glAttachShader(prog, vertShader);
    glAttachShader(prog, fragShader);

    for (i = 0; i < attribNameCt; i++)
    {
        if (strlen(attribNames[i]))
            glBindAttribLocation(prog, attribLocations[i], attribNames[i]);
    }

    status *= glueLinkProgram(prog);
    status *= glueValidateProgram(prog);

    if (status)
    {
        for (i = 0; i < uniformNameCt; i++)
        {
            if (strlen(uniformNames[i]))
                uniformLocations[i] = glueGetUniformLocation(prog, uniformNames[i]);
        }
        *program = prog;
    }

    if (vertShader)
        glDeleteShader(vertShader);
    if (fragShader)
        glDeleteShader(fragShader);

    glError();

    return status;
}
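To see how the enums and glueCreateProgram() fit together, here is a minimal usage sketch. The name strings ("inVertex", "MVP", "pointSize", "vertexColor", "texture") and the vertSrc/fragSrc variables are assumptions for illustration; they must match whatever point.vsh/point.fsh actually declare.

// Sketch only: names are assumptions and must match the shader sources
static const GLchar *attribName[NUM_ATTRIBS] = {
    "inVertex",
};
static const GLchar *uniformName[NUM_UNIFORMS] = {
    "MVP", "pointSize", "vertexColor", "texture",
};
static const GLint attribLocation[NUM_ATTRIBS] = {
    ATTRIB_VERTEX,
};

// vertSrc / fragSrc hold the loaded contents of point.vsh / point.fsh
glueCreateProgram(vertSrc, fragSrc,
                  NUM_ATTRIBS, attribName, attribLocation,
                  NUM_UNIFORMS, uniformName, program[PROGRAM_POINT].uniform,
                  &program[PROGRAM_POINT].id);

// After a successful link, uniforms are addressed by enum index:
glUseProgram(program[PROGRAM_POINT].id);
glUniform1i(program[PROGRAM_POINT].uniform[UNIFORM_TEXTURE], 0);   // brush texture on unit 0

Because glueCreateProgram() writes each uniform location into program[PROGRAM_POINT].uniform in the same order as the name array, the UNIFORM_* enum doubles as the index into that array.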
Shader compilation problems troubled me for a long time; this demo shows a way to surface error information, which is very nice.
#define glError() { \
    GLenum err = glGetError(); \
    if (err != GL_NO_ERROR) { \
        printf("glError: %04x caught at %s:%u\n", err, __FILE__, __LINE__); \
    } \
}
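glError() reports GL errors after the fact; for actual shader compile failures, the shader info log is what you want. A small sketch (not part of the demo) that dumps a shader object's compile log:

// Sketch: print the compile log of a shader object
void printShaderLog(GLuint shader)
{
    GLint logLength = 0;
    glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &logLength);
    if (logLength > 1) {
        GLchar *log = (GLchar *)malloc(logLength);
        glGetShaderInfoLog(shader, logLength, NULL, log);
        printf("Shader compile log:\n%s\n", log);
        free(log);
    }
}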
CoreGraphics
A custom textureInfo_t struct is defined. In textureFromName(), CoreGraphics decodes the named image into bitmap data, the data is uploaded to OpenGL ES as a texture, and the texture info is returned in a textureInfo_t.
// Texture
typedef struct {
    GLuint id;
    GLsizei width, height;
} textureInfo_t;

// Create a texture from an image
- (textureInfo_t)textureFromName:(NSString *)name
{
    CGImageRef brushImage;
    CGContextRef brushContext;
    GLubyte *brushData;
    size_t width, height;
    GLuint texId;
    textureInfo_t texture = {0, 0, 0};  // zero-initialized so the return value is valid even if loading fails

    // First create a UIImage object from the data in an image file, and then extract the Core Graphics image
    brushImage = [UIImage imageNamed:name].CGImage;

    // Get the width and height of the image
    width = CGImageGetWidth(brushImage);
    height = CGImageGetHeight(brushImage);

    // Make sure the image exists
    if (brushImage) {
        // Allocate memory needed for the bitmap context
        brushData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
        // Use the bitmap creation function provided by the Core Graphics framework.
        brushContext = CGBitmapContextCreate(brushData, width, height, 8, width * 4, CGImageGetColorSpace(brushImage), kCGImageAlphaPremultipliedLast);
        // After you create the context, you can draw the image to the context.
        CGContextDrawImage(brushContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), brushImage);
        // You don't need the context at this point, so you need to release it to avoid memory leaks.
        CGContextRelease(brushContext);
        // Use OpenGL ES to generate a name for the texture.
        glGenTextures(1, &texId);
        // Bind the texture name.
        glBindTexture(GL_TEXTURE_2D, texId);
        // Set the texture parameters to use a minifying filter and a linear filter (weighted average)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        // Specify a 2D texture image, providing a pointer to the image data in memory
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)width, (int)height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData);
        // Release the image data; it's no longer needed
        free(brushData);

        texture.id = texId;
        texture.width = (int)width;
        texture.height = (int)height;
    }

    return texture;
}
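Usage is then a single call during setup; the brush image name below is a placeholder, not necessarily the one used in the demo.

// Sketch: load the brush texture once while setting up GL ("Particle.png" is a placeholder name)
brushTexture = [self textureFromName:@"Particle.png"];
glBindTexture(GL_TEXTURE_2D, brushTexture.id);

Setting GL_TEXTURE_MIN_FILTER to GL_LINEAR inside textureFromName: matters: the default minifying filter expects mipmaps, and since none are generated here the texture would otherwise be incomplete and sample as black.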
Gesture handling
The only gestures here are tap and drag. touchesBegan records the position of the first point; while dragging, touchesMoved supplies both the current and the previous position, so the finger's path can be drawn segment by segment with renderLineFromPoint().
// Handles the start of a touch
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGRect bounds = [self bounds];
    UITouch *touch = [[event touchesForView:self] anyObject];
    firstTouch = YES;
    // Convert touch point from UIView referential to OpenGL one (upside-down flip)
    location = [touch locationInView:self];
    location.y = bounds.size.height - location.y;
}

// Handles the continuation of a touch.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGRect bounds = [self bounds];
    UITouch *touch = [[event touchesForView:self] anyObject];

    // Convert touch point from UIView referential to OpenGL one (upside-down flip)
    if (firstTouch) {
        firstTouch = NO;
        previousLocation = [touch previousLocationInView:self];
        previousLocation.y = bounds.size.height - previousLocation.y;
    } else {
        location = [touch locationInView:self];
        location.y = bounds.size.height - location.y;
        previousLocation = [touch previousLocationInView:self];
        previousLocation.y = bounds.size.height - previousLocation.y;
    }

    // Render the stroke
    if (!lyArr) {
        lyArr = [NSMutableArray array];
    }
    [lyArr addObject:[[LYPoint alloc] initWithCGPoint:previousLocation]];
    [lyArr addObject:[[LYPoint alloc] initWithCGPoint:location]];
    [self renderLineFromPoint:previousLocation toPoint:location];
}

// Handles the end of a touch event when the touch is a tap.
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGRect bounds = [self bounds];
    UITouch *touch = [[event touchesForView:self] anyObject];
    if (firstTouch) {
        firstTouch = NO;
        previousLocation = [touch previousLocationInView:self];
        previousLocation.y = bounds.size.height - previousLocation.y;
        [self renderLineFromPoint:previousLocation toPoint:location];
    }
}

// Handles the end of a touch event.
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
{
    // If appropriate, add code necessary to save the state of the application.
    // This application is not saving state.
    NSLog(@"cancelled");
}
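LYPoint itself is not shown in the post. A minimal sketch that satisfies both call sites (initWithCGPoint: above, and the mX/mY properties used in the playback snippet in the comments) could look like this; the NSNumber property type is an assumption.

// Sketch of LYPoint; property names follow the usage above, types are assumed
@interface LYPoint : NSObject
@property (nonatomic, strong) NSNumber *mX;
@property (nonatomic, strong) NSNumber *mY;
- (instancetype)initWithCGPoint:(CGPoint)point;
@end

@implementation LYPoint
- (instancetype)initWithCGPoint:(CGPoint)point
{
    if (self = [super init]) {
        _mX = @(point.x);
        _mY = @(point.y);
    }
    return self;
}
@end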
Stroke interpolation
The path from the start point to the end point is broken into a number of points, and each point is drawn individually; together they read as a continuous line.
count = MAX(ceilf(sqrtf((end.x - start.x) * (end.x - start.x) + (end.y - start.y) * (end.y - start.y)) / kBrushPixelStep), 1);
This line is the core idea: it splits the segment into count points, roughly one every kBrushPixelStep pixels. The points are then drawn with glDrawArrays(GL_POINTS, 0, (int)vertexCount).
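For example, if kBrushPixelStep were 3, a drag from (0, 0) to (30, 40) spans sqrt(30*30 + 40*40) = 50 px and would be split into count = ceil(50 / 3) = 17 points.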
// Draws a line onscreen based on where the user touches
- (void)renderLineFromPoint:(CGPoint)start toPoint:(CGPoint)end
{
    static GLfloat *vertexBuffer = NULL;
    static NSUInteger vertexMax = 64;
    NSUInteger vertexCount = 0,
               count,
               i;

    [EAGLContext setCurrentContext:context];
    glBindFramebuffer(GL_FRAMEBUFFER, viewFramebuffer);

    // Convert locations from Points to Pixels
    CGFloat scale = self.contentScaleFactor;
    start.x *= scale;
    start.y *= scale;
    end.x *= scale;
    end.y *= scale;

    // Allocate vertex array buffer
    if (vertexBuffer == NULL)
        vertexBuffer = malloc(vertexMax * 2 * sizeof(GLfloat));

    // Add points to the buffer so there are drawing points every X pixels
    count = MAX(ceilf(sqrtf((end.x - start.x) * (end.x - start.x) + (end.y - start.y) * (end.y - start.y)) / kBrushPixelStep), 1);
    for (i = 0; i < count; ++i) {
        if (vertexCount == vertexMax) {
            vertexMax = 2 * vertexMax;
            vertexBuffer = realloc(vertexBuffer, vertexMax * 2 * sizeof(GLfloat));
        }
        vertexBuffer[2 * vertexCount + 0] = start.x + (end.x - start.x) * ((GLfloat)i / (GLfloat)count);
        vertexBuffer[2 * vertexCount + 1] = start.y + (end.y - start.y) * ((GLfloat)i / (GLfloat)count);
        vertexCount += 1;
    }

    // Load data to the Vertex Buffer Object
    glBindBuffer(GL_ARRAY_BUFFER, vboId);
    glBufferData(GL_ARRAY_BUFFER, vertexCount * 2 * sizeof(GLfloat), vertexBuffer, GL_DYNAMIC_DRAW);

    glEnableVertexAttribArray(ATTRIB_VERTEX);
    glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, GL_FALSE, 0, 0);

    // Draw
    glUseProgram(program[PROGRAM_POINT].id);
    glDrawArrays(GL_POINTS, 0, (int)vertexCount);

    // Display the buffer
    glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER];
}
Blurred-point effect
The soft, blurred brush dot comes from enabling blending and setting the blend function:
// Enable blending and set a blending function appropriate for premultiplied alpha pixel data
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
Note that color and the texture2D() sample are combined with the * operator, not +.
uniform sampler2D texture;
varying lowp vec4 color;

void main()
{
    gl_FragColor = color * texture2D(texture, gl_PointCoord);
}
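For completeness, the vertex shader side has to set gl_PointSize so that each GL_POINTS vertex is rendered as a brush-sized sprite. Here is a sketch of a matching point.vsh, with uniform and attribute names assumed (they must match the names passed to glueCreateProgram()):

// Sketch of a matching point.vsh; uniform/attribute names are assumptions
attribute vec4 inVertex;

uniform mat4 MVP;
uniform float pointSize;
uniform lowp vec4 vertexColor;

varying lowp vec4 color;

void main()
{
    gl_Position = MVP * inVertex;
    gl_PointSize = pointSize;
    color = vertexColor;
}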
Finally
One more picture to wrap up.
The source code is attached.
Exercise
- How would you change the "加油" drawing played back at the beginning?
Comments
When my print View exceeds a certain length (for example 4000), it reports "failed to make complete framebuffer object 8cd6", and I have to reduce the length to around 2000 before it works. What is the reason?
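A likely cause: 0x8CD6 is GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT, which usually means the renderbuffer storage could not be allocated at that size, leaving the attachment invalid. The device limit can be queried before allocating a very tall drawing surface (a sketch, not from the demo):

// Sketch: query the maximum renderbuffer size supported by the device
GLint maxRenderbufferSize = 0;
glGetIntegerv(GL_MAX_RENDERBUFFER_SIZE, &maxRenderbufferSize);
NSLog(@"GL_MAX_RENDERBUFFER_SIZE = %d", maxRenderbufferSize);

If the requested height (times contentScaleFactor) exceeds this value, the content has to be tiled or scaled down.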
// Playback recorded path, which is "Shake Me"
NSString *path = [[NSBundle mainBundle] pathForResource:@"abc" ofType:@"string"];
NSString *str = [NSString stringWithContentsOfFile:path encoding:NSUTF8StringEncoding error:nil];
lyArr = [NSMutableArray array];
NSArray *jsonArr = [NSJSONSerialization JSONObjectWithData:[str dataUsingEncoding:NSUTF8StringEncoding] options:NSJSONReadingAllowFragments error:nil];
for (NSDictionary *dict in jsonArr) {
    LYPoint *point = [LYPoint new];
    point.mX = [dict objectForKey:@"mX"];
    point.mY = [dict objectForKey:@"mY"];
    [lyArr addObject:point];
}
Question 1: recording the touchesMoved points is the right approach. Treat each touchesBegan-to-touchesEnded sequence as one operation and maintain a stack of these operations; undo and redo are then just popping and pushing (after each change, redraw directly from the points still on the stack).
Question 2: same as above.
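A minimal sketch of that operation stack (all names below are made up for illustration): each finished stroke is pushed in touchesEnded, undo pops one, and the view is redrawn from whatever remains.

// Sketch of an undo stack of strokes; names are illustrative only
NSMutableArray<NSArray<LYPoint *> *> *strokeStack = [NSMutableArray array];

// In touchesEnded: push the points collected since touchesBegan
[strokeStack addObject:[currentStroke copy]];

// Undo: pop the last stroke, clear the framebuffer, then redraw what is left
[strokeStack removeLastObject];
for (NSArray<LYPoint *> *stroke in strokeStack) {
    for (NSUInteger i = 0; i + 1 < stroke.count; i++) {
        CGPoint from = CGPointMake(stroke[i].mX.floatValue, stroke[i].mY.floatValue);
        CGPoint to   = CGPointMake(stroke[i + 1].mX.floatValue, stroke[i + 1].mY.floatValue);
        [self renderLineFromPoint:from toPoint:to];
    }
}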