GPUImage3 (Part 1)

Author: 熊啊熊啊熊 | Published 2019-12-04 15:38

Since I have recently been learning Metal, I took a look at GPUImage3. The previous two versions were based on OpenGL ES; with GPUImage3 the implementation switched to Metal.

Let's take a look at the design of the GPUImage3 framework and the implementation of some of its core classes.

[Figure: GPUImage3 project structure]

The framework is divided into four parts: Base, Inputs, Outputs, and Operations.

Base: utility classes
Inputs: input sources
Outputs: output targets
Operations: filters

Base

Common data types: ImageOrientation, Color, Position, Size, Matrix, Timestamp, Texture
Common utility classes: Pipeline, MetalRenderingDevice, ShaderUniformSettings

Let's go over a few of the key classes:

1.MetalRenderingDevice:

It is exposed as a singleton and maintains some frequently used Metal objects:

public let sharedMetalRenderingDevice = MetalRenderingDevice()

public class MetalRenderingDevice {
    public let device: MTLDevice
    public let commandQueue: MTLCommandQueue
    public let shaderLibrary: MTLLibrary

    ...

    init() {
        guard let device = MTLCreateSystemDefaultDevice() else {fatalError("Could not create Metal Device")}
        self.device = device
        
        guard let queue = self.device.makeCommandQueue() else {fatalError("Could not create command queue")}
        self.commandQueue = queue
        
        if #available(iOS 9, macOS 10.13, *) {
            self.metalPerformanceShadersAreSupported = MPSSupportsMTLDevice(device)
        } else {
            self.metalPerformanceShadersAreSupported = false
        }
        
        do {
            let frameworkBundle = Bundle(for: MetalRenderingDevice.self)
            let metalLibraryPath = frameworkBundle.path(forResource: "default", ofType: "metallib")!
            
            self.shaderLibrary = try device.makeLibrary(filepath:metalLibraryPath)
        } catch {
            fatalError("Could not load library")
        }
    }

}

MTLDevice: a processor capable of data-parallel computation; it represents the GPU. The default GPU is obtained via MTLCreateSystemDefaultDevice().
MTLCommandQueue: created from the device via device.makeCommandQueue(); it delivers commands to the GPU in order.
MTLLibrary: a collection of shader functions. It can be built from shader source compiled at run time or from a precompiled library; device.makeDefaultLibrary() returns the app's default library, while the code above loads the framework's default.metallib with makeLibrary(filepath:).

MTLDevice, MTLCommandQueue and MTLLibrary only need to be created once; creation is relatively expensive, and the objects can be reused.
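To make the reuse concrete, here is a minimal sketch of my own (not library code) in which each frame simply grabs the shared command queue instead of creating a new one:

import Metal

// Sketch: reuse sharedMetalRenderingDevice rather than recreating Metal objects per frame.
func encodeOneFrame() {
    guard let commandBuffer = sharedMetalRenderingDevice.commandQueue.makeCommandBuffer() else { return }
    // ... encode render or compute commands against this command buffer ...
    commandBuffer.commit()
}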

2.MetalRendering:

This file contains two parts.
1. An extension on MTLCommandBuffer:

extension MTLCommandBuffer {
    /// Clears the output texture to the given color
    func clear(with color: Color, outputTexture: Texture) {
        let renderPass = MTLRenderPassDescriptor()
        renderPass.colorAttachments[0].texture = outputTexture.texture
        renderPass.colorAttachments[0].clearColor = MTLClearColorMake(Double(color.redComponent), Double(color.greenComponent), Double(color.blueComponent), Double(color.alphaComponent))
        renderPass.colorAttachments[0].storeAction = .store
        renderPass.colorAttachments[0].loadAction = .clear
        
        print("Clear color: \(renderPass.colorAttachments[0].clearColor)")
        
        guard let renderEncoder = self.makeRenderCommandEncoder(descriptor: renderPass) else {
            fatalError("Could not create render encoder")
        }
//        renderEncoder.setRenderPipelineState(sharedMetalRenderingDevice.passthroughRenderState)

//        renderEncoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 0)

        renderEncoder.endEncoding()
    }
    
    //Encodes a render pass built from the given configuration
    func renderQuad(pipelineState:MTLRenderPipelineState, //render pipeline state
                    uniformSettings:ShaderUniformSettings? = nil, //uniform values
                    inputTextures:[UInt:Texture], // input textures
                    useNormalizedTextureCoordinates:Bool = true,
                    imageVertices:[Float] = standardImageVertices, //vertex coordinates
                    outputTexture:Texture, // destination texture
                    outputOrientation:ImageOrientation = .portrait) {
        /// Create a buffer from the vertex data
        let vertexBuffer = sharedMetalRenderingDevice.device.makeBuffer(bytes: imageVertices,
                                                                        length: imageVertices.count * MemoryLayout<Float>.size,
                                                                        options: [])!
        vertexBuffer.label = "Vertices"
        
        
        let renderPass = MTLRenderPassDescriptor()
        renderPass.colorAttachments[0].texture = outputTexture.texture
        renderPass.colorAttachments[0].clearColor = MTLClearColorMake(1, 0, 0, 1)
        renderPass.colorAttachments[0].storeAction = .store
        renderPass.colorAttachments[0].loadAction = .clear
        
        guard let renderEncoder = self.makeRenderCommandEncoder(descriptor: renderPass) else {
            fatalError("Could not create render encoder")
        }
        renderEncoder.setFrontFacing(.counterClockwise) //counter-clockwise front-facing winding
        renderEncoder.setRenderPipelineState(pipelineState) //set the render pipeline state
        renderEncoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0) //bind the vertex data
        
        for textureIndex in 0..<inputTextures.count {
            let currentTexture = inputTextures[UInt(textureIndex)]!
            
            //texture coordinate data for this input
            let inputTextureCoordinates = currentTexture.textureCoordinates(for:outputOrientation, normalized:useNormalizedTextureCoordinates)
            let textureBuffer = sharedMetalRenderingDevice.device.makeBuffer(bytes: inputTextureCoordinates,
                                                                             length: inputTextureCoordinates.count * MemoryLayout<Float>.size,
                                                                             options: [])!
            textureBuffer.label = "Texture Coordinates"

            //bind the texture coordinate data
            renderEncoder.setVertexBuffer(textureBuffer, offset: 0, index: 1 + textureIndex)
            //bind the texture itself
            renderEncoder.setFragmentTexture(currentTexture.texture, index: textureIndex)
        }
        
        //pass the uniform values
        uniformSettings?.restoreShaderSettings(renderEncoder: renderEncoder)
        ///draw the quad as a triangle strip
        renderEncoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4)
        /// finish encoding
        renderEncoder.endEncoding()
    }
}

The extension contains two methods:
1. clear: clears the output texture to a solid color.
2. renderQuad: takes the given configuration and encodes the render commands for a full-screen quad.

2. The second part is generateRenderPipelineState, which generates an MTLRenderPipelineState from a device, a vertexFunctionName and a fragmentFunctionName:

//Generates an MTLRenderPipelineState (plus a uniform lookup table) from a device, vertexFunctionName and fragmentFunctionName
func generateRenderPipelineState(device:MetalRenderingDevice,
                                 vertexFunctionName:String,
                                 fragmentFunctionName:String,
                                 operationName:String) -> (MTLRenderPipelineState, [String:(Int, MTLDataType)]) {
    //vertex shader function
    guard let vertexFunction = device.shaderLibrary.makeFunction(name: vertexFunctionName) else {
        fatalError("\(operationName): could not compile vertex function \(vertexFunctionName)")
    }
    
    //fragment shader function
    guard let fragmentFunction = device.shaderLibrary.makeFunction(name: fragmentFunctionName) else {
        fatalError("\(operationName): could not compile fragment function \(fragmentFunctionName)")
    }
    
    let descriptor = MTLRenderPipelineDescriptor()
    descriptor.colorAttachments[0].pixelFormat = MTLPixelFormat.bgra8Unorm
    descriptor.rasterSampleCount = 1
    descriptor.vertexFunction = vertexFunction
    descriptor.fragmentFunction = fragmentFunction
    
    do {
        var reflection:MTLAutoreleasedRenderPipelineReflection?
        let pipelineState = try device.device.makeRenderPipelineState(descriptor: descriptor, options: [.bufferTypeInfo, .argumentInfo], reflection: &reflection)

        var uniformLookupTable:[String:(Int, MTLDataType)] = [:]
        if let fragmentArguments = reflection?.fragmentArguments {
            for fragmentArgument in fragmentArguments where fragmentArgument.type == .buffer {
                if
                  (fragmentArgument.bufferDataType == .struct),
                  let members = fragmentArgument.bufferStructType?.members.enumerated() {
                    for (index, uniform) in members {
                        uniformLookupTable[uniform.name] = (index, uniform.dataType)
                    }
                }
            }
        }
        
        return (pipelineState, uniformLookupTable)
    } catch {
        fatalError("Could not create render pipeline state for vertex:\(vertexFunctionName), fragment:\(fragmentFunctionName), error:\(error)")
    }
}
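
Putting the two pieces together, here is a sketch of my own showing how an operation could drive them. The shader names "oneInputVertex" and "passthroughFragment" and the Texture/Color parameters are assumptions for illustration rather than the library's exact internals, and in practice the pipeline state would be created once and cached:

// Sketch (assumed shader names, placeholder textures): clear the output, then draw a full-screen quad.
func renderPassthrough(from inputTexture: Texture, to outputTexture: Texture, clearColor: Color) {
    let (pipelineState, _) = generateRenderPipelineState(device: sharedMetalRenderingDevice,
                                                         vertexFunctionName: "oneInputVertex",
                                                         fragmentFunctionName: "passthroughFragment",
                                                         operationName: "Passthrough")
    guard let commandBuffer = sharedMetalRenderingDevice.commandQueue.makeCommandBuffer() else { return }
    commandBuffer.clear(with: clearColor, outputTexture: outputTexture)
    commandBuffer.renderQuad(pipelineState: pipelineState,
                             inputTextures: [0: inputTexture],
                             outputTexture: outputTexture)
    commandBuffer.commit()
}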
3.Pipeline

This file defines a few protocols, a custom operator, and the producer/consumer container objects.

First, let's look at ImageSource:

public protocol ImageSource {
    var targets:TargetContainer { get }
    func transmitPreviousImage(to target:ImageConsumer, atIndex:UInt)
}

This is the data-source protocol; a conforming type can output textures. It is generally adopted by the classes under Inputs, and also by filters, since filters can both receive and output.
var targets:TargetContainer records the targets to output to.
func transmitPreviousImage passes the previously rendered texture to the specified target.

Now the ImageSource extension:

public extension ImageSource {
    func addTarget(_ target:ImageConsumer, atTargetIndex:UInt? = nil) {
        if let targetIndex = atTargetIndex {
            target.setSource(self, atIndex:targetIndex)
            targets.append(target, indexAtTarget:targetIndex)
            transmitPreviousImage(to:target, atIndex:targetIndex)
        } else if let indexAtTarget = target.addSource(self) {
            targets.append(target, indexAtTarget:indexAtTarget)
            transmitPreviousImage(to:target, atIndex:indexAtTarget)
        } else {
            debugPrint("Warning: tried to add target beyond target's input capacity")
        }
    }

    func removeAllTargets() {
        for (target, index) in targets {
            target.removeSourceAtIndex(index)
        }
        targets.removeAll()
    }
    
    func updateTargetsWithTexture(_ texture:Texture) {
        for (target, index) in targets {
            target.newTextureAvailable(texture, fromSourceIndex:index)
        }
    }
}

addTarget: adds a target to targets and calls transmitPreviousImage as part of adding it.
removeAllTargets: removes all recorded targets.
updateTargetsWithTexture: pushes new texture data to every recorded target.
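
As an illustration, here is a minimal sketch of my own (not library code) of a type conforming to ImageSource, assuming TargetContainer can be constructed directly as in the library's own classes:

// Sketch: a source that pushes externally produced textures to its targets.
class ManualTextureSource: ImageSource {
    let targets = TargetContainer()

    // A real source would cache its last texture and re-send it here.
    func transmitPreviousImage(to target: ImageConsumer, atIndex: UInt) {
    }

    // Hand a freshly produced texture to every registered target.
    func publish(_ texture: Texture) {
        updateTargetsWithTexture(texture)
    }
}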

Next, let's look at ImageConsumer:

public protocol ImageConsumer:AnyObject {
    var maximumInputs:UInt { get }
    var sources:SourceContainer { get }
    
    func newTextureAvailable(_ texture:Texture, fromSourceIndex:UInt)
}

A consumer, as the name suggests, receives texture data; it can take input. It is generally adopted by the classes under Outputs, and also by filters, since filters can both receive and output.

maximumInputs: the maximum number of inputs.
sources: records where the data comes from, i.e. the ImageSource objects feeding this consumer.
newTextureAvailable: receives newly updated texture data.

The ImageConsumer extension is similar to ImageSource's: it adds and removes data sources.

public extension ImageConsumer {
    func addSource(_ source:ImageSource) -> UInt? {
        return sources.append(source, maximumInputs:maximumInputs)
    }
    
    func setSource(_ source:ImageSource, atIndex:UInt) {
        _ = sources.insert(source, atIndex:atIndex, maximumInputs:maximumInputs)
    }

    func removeSourceAtIndex(_ index:UInt) {
        sources.removeAtIndex(index)
    }
}

Now look at ImageProcessingOperation. It has both input and output capability, which makes it a filter; filters conform to this protocol.

public protocol ImageProcessingOperation: ImageConsumer, ImageSource {
}
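
Since an operation is both a consumer and a source, a skeleton pass-through filter can be sketched like this (my own illustration; the library's real filters build on a base operation class that does the actual Metal rendering):

// Sketch: an operation that forwards its input texture unchanged.
class PassthroughOperation: ImageProcessingOperation {
    let maximumInputs: UInt = 1
    let targets = TargetContainer()
    let sources = SourceContainer()

    // A real operation would cache and re-send its last rendered output here.
    func transmitPreviousImage(to target: ImageConsumer, atIndex: UInt) {
    }

    // No processing: hand the incoming texture straight to all downstream targets.
    func newTextureAvailable(_ texture: Texture, fromSourceIndex: UInt) {
        updateTargetsWithTexture(texture)
    }
}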

GPUImage3 takes full advantage of Swift's features. Next, the custom operator:

infix operator --> : AdditionPrecedence
//precedencegroup ProcessingOperationPrecedence {
//    associativity: left
////    higherThan: Multiplicative
//}
@discardableResult public func --><T:ImageConsumer>(source:ImageSource, destination:T) -> T {
    source.addTarget(destination)
    return destination
}

With this custom operator, inputs and outputs are linked together with --> to form a filter chain.
For example:

pictureInput = PictureInput(image: UIImage(named: "image_5.jpg")!)
stretchFilter = StretchFilter()
pictureInput --> stretchFilter --> renderView
pictureInput.processImage()

Linked with -->, anything conforming to ImageSource and ImageConsumer can keep being chained one after another.
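
The same pattern scales to longer chains. Here is a sketch (SaturationAdjustment and GaussianBlur are operations I believe ship under Operations; check that folder for the exact names, and renderView is the same RenderView as in the example above):

let saturation = SaturationAdjustment()
let blur = GaussianBlur()
pictureInput --> saturation --> blur --> renderView
pictureInput.processImage()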


That's all I'll record for now. I'm still exploring this myself; these notes simply document my own learning.

Love life, enjoy life!
