
Android P Graphics Display System (8): SurfaceFlinger

Author: 夕月风 | Published 2018-12-05 18:40

    [TOC]

    SurfaceFlinger Composition Flow (Part 3)

    Configuring hardware composition: setUpHWComposer

    Back in handleMessageRefresh, let's continue with the handling of the REFRESH message. By this point the data that needs to be composited for display has already been updated, during rebuildLayerStacks, into each Display's own layersSortedByZ. With the layer stacks rebuilt, the HWC composition is set up next.

    setUpHWComposer is fairly long, so we will look at it in sections. It mainly does the following things:

    1. DisplayDevice beginFrame

    void SurfaceFlinger::setUpHWComposer() {
        ... ...
    
        for (size_t dpy=0 ; dpy<mDisplays.size() ; dpy++) {
            bool dirty = !mDisplays[dpy]->getDirtyRegion(false).isEmpty();
            bool empty = mDisplays[dpy]->getVisibleLayersSortedByZ().size() == 0;
            bool wasEmpty = !mDisplays[dpy]->lastCompositionHadVisibleLayers;
    
            // Decide whether this display must be recomposed
            bool mustRecompose = dirty && !(empty && wasEmpty);
    
            ... ...
    
            mDisplays[dpy]->beginFrame(mustRecompose);
    
            if (mustRecompose) {
                mDisplays[dpy]->lastCompositionHadVisibleLayers = !empty;
            }
        }
    

    Android handles each display separately; here it mainly calls the Display's beginFrame function.

    status_t DisplayDevice::beginFrame(bool mustRecompose) const {
        return mDisplaySurface->beginFrame(mustRecompose);
    }
    

    mDisplaySurface differs depending on the type of display.

    The primary and external displays use FramebufferSurface, while virtual displays use VirtualDisplaySurface; we will set virtual displays aside for now.

    status_t FramebufferSurface::beginFrame(bool /*mustRecompose*/) {
        return NO_ERROR;
    }
    

    FramebufferSurface does no real work in beginFrame.

    Back in setUpHWComposer:

    2. Building the work list

    void SurfaceFlinger::setUpHWComposer() {
         ... ...
    
        // build the h/w work list
        if (CC_UNLIKELY(mGeometryInvalid)) {
            mGeometryInvalid = false;
            for (size_t dpy=0 ; dpy<mDisplays.size() ; dpy++) {
                sp<const DisplayDevice> displayDevice(mDisplays[dpy]);
                const auto hwcId = displayDevice->getHwcDisplayId();
                if (hwcId >= 0) {
                    const Vector<sp<Layer>>& currentLayers(
                            displayDevice->getVisibleLayersSortedByZ());
                    for (size_t i = 0; i < currentLayers.size(); i++) {
                        const auto& layer = currentLayers[i];
                        if (!layer->hasHwcLayer(hwcId)) {
                            if (!layer->createHwcLayer(getBE().mHwc.get(), hwcId)) {
                                layer->forceClientComposition(hwcId);
                                continue;
                            }
                        }
    
                        layer->setGeometry(displayDevice, i);
                        if (mDebugDisableHWC || mDebugRegion) {
                            layer->forceClientComposition(hwcId);
                        }
                    }
                }
            }
        }
    

    For every Layer of every Display, a corresponding HWC layer is created. Note the hwcId: HWC layers are only created for displays whose hwcId is zero or greater. Layer's createHwcLayer function is as follows:

    bool Layer::createHwcLayer(HWComposer* hwc, int32_t hwcId) {
        LOG_ALWAYS_FATAL_IF(getBE().mHwcLayers.count(hwcId) != 0,
                            "Already have a layer for hwcId %d", hwcId);
        HWC2::Layer* layer = hwc->createLayer(hwcId);
        if (!layer) {
            return false;
        }
        LayerBE::HWCInfo& hwcInfo = getBE().mHwcLayers[hwcId];
        hwcInfo.hwc = hwc;
        hwcInfo.layer = layer;
        layer->setLayerDestroyedListener(
                [this, hwcId](HWC2::Layer* /*layer*/) { getBE().mHwcLayers.erase(hwcId); });
        return true;
    }
    

    Creation is keyed by hwcId; in other words, each Layer on the HWC client side (SurfaceFlinger) gets one HWC layer per Display. HWComposer looks up the HWC2 Display by hwcId and then asks that concrete HWC2::Display to create the HWC layer.

    The layer is created through HWComposer:

    HWC2::Layer* HWComposer::createLayer(int32_t displayId) {
        if (!isValidDisplay(displayId)) {
            ALOGE("Failed to create layer on invalid display %d", displayId);
            return nullptr;
        }
        auto display = mDisplayData[displayId].hwcDisplay;
        HWC2::Layer* layer;
        auto error = display->createLayer(&layer);
        if (error != HWC2::Error::None) {
            ALOGE("Failed to create layer on display %d: %s (%d)", displayId,
                    to_string(error).c_str(), static_cast<int32_t>(error));
            return nullptr;
        }
        return layer;
    }
    

    The actual creation of the HWC layer is ultimately done in the vendor-implemented HAL; the corresponding HWC2 command is HWC2_FUNCTION_CREATE_LAYER.

    Error HwcHal::createLayer(Display display, Layer* outLayer)
    {
        int32_t err = mDispatch.createLayer(mDevice, display, outLayer);
        return static_cast<Error>(err);
    }
    

    HWC2::Display also keeps a reference to the created HWC layer:

    * frameworks/native/services/surfaceflinger/DisplayHardware/HWC2.cpp
    
    Error Display::createLayer(Layer** outLayer)
    {
        if (!outLayer) {
            return Error::BadParameter;
        }
        hwc2_layer_t layerId = 0;
        auto intError = mComposer.createLayer(mId, &layerId);
        auto error = static_cast<Error>(intError);
        if (error != Error::None) {
            return error;
        }
    
        auto layer = std::make_unique<Layer>(
                mComposer, mCapabilities, mId, layerId);
        *outLayer = layer.get();
        mLayers.emplace(layerId, std::move(layer));
        return Error::None;
    }
    

    If the HWC layer cannot be created, this layer is forced to Client composition via forceClientComposition.

    void Layer::forceClientComposition(int32_t hwcId) {
        if (getBE().mHwcLayers.count(hwcId) == 0) {
            ALOGE("forceClientComposition: no HWC layer found (%d)", hwcId);
            return;
        }
    
        getBE().mHwcLayers[hwcId].forceClientComposition = true;
    }
    

    In addition, Client composition is also forced when HWC composition is disabled or region debugging is enabled:

    mDebugDisableHWC || mDebugRegion
    

    Both of these debug options can be toggled in Settings, under Developer options.

    Once the HWC layer has been created, the layer's geometry is set:

    void Layer::setGeometry(const sp<const DisplayDevice>& displayDevice, uint32_t z)
    {
        ... ... // Note: all of the data here comes from the DrawingState
        const State& s(getDrawingState());
        ... ...
        if (!isOpaque(s) || getAlpha() != 1.0f) {
            blendMode =
                    mPremultipliedAlpha ? HWC2::BlendMode::Premultiplied : HWC2::BlendMode::Coverage;
        }
        auto error = hwcLayer->setBlendMode(blendMode);
    
        // Compute the displayFrame
        Rect frame{t.transform(computeBounds(activeTransparentRegion))};
        ... ...
        const Transform& tr(displayDevice->getTransform());
        Rect transformedFrame = tr.transform(frame);
        error = hwcLayer->setDisplayFrame(transformedFrame);
        ... ...
    
        // Compute the sourceCrop
        FloatRect sourceCrop = computeCrop(displayDevice);
        error = hwcLayer->setSourceCrop(sourceCrop);
        ... ...
    
        // Set the alpha
        float alpha = static_cast<float>(getAlpha());
        error = hwcLayer->setPlaneAlpha(alpha);
        ... ...
    
        // Set the z-order
        error = hwcLayer->setZOrder(z);
        ... ...
    
        int type = s.type;
        int appId = s.appId;
        sp<Layer> parent = mDrawingParent.promote();
        if (parent.get()) {
            auto& parentState = parent->getDrawingState();
            type = parentState.type;
            appId = parentState.appId;
        }
    
        // Set the layer info
        error = hwcLayer->setInfo(type, appId);
        ALOGE_IF(error != HWC2::Error::None, "[%s] Failed to set info (%d)", mName.string(),
                 static_cast<int32_t>(error));
    
        // Set the transform
        const uint32_t orientation = transform.getOrientation();
        if (orientation & Transform::ROT_INVALID) {
            // we can only handle simple transformation
            hwcInfo.forceClientComposition = true;
        } else {
            auto transform = static_cast<HWC2::Transform>(orientation);
            auto error = hwcLayer->setTransform(transform);
            ... ...
        }
    }
    

    Note that the data here all comes from the DrawingState. setGeometry mainly does the following:

    • Determine the HWC layer's blend mode
      The blend mode describes how two layers are blended with each other. The main modes are:
    /* Blend modes, settable per layer */
    typedef enum {
        HWC2_BLEND_MODE_INVALID = 0,
    
        /* colorOut = colorSrc */
        HWC2_BLEND_MODE_NONE = 1,
    
        /* colorOut = colorSrc + colorDst * (1 - alphaSrc) */
        HWC2_BLEND_MODE_PREMULTIPLIED = 2,
    
        /* colorOut = colorSrc * alphaSrc + colorDst * (1 - alphaSrc) */
        HWC2_BLEND_MODE_COVERAGE = 3,
    } hwc2_blend_mode_t;
    

    HWC2_BLEND_MODE_NONE: no blending; the output is exactly the source.
    HWC2_BLEND_MODE_PREMULTIPLIED: premultiplied alpha; the source color is already multiplied by its alpha, and the destination is attenuated by (1 - alphaSrc).
    HWC2_BLEND_MODE_COVERAGE: coverage blending; both the source and the destination are weighted by the source alpha. (A small numeric sketch of these equations follows this list.)

    • Compute the displayFrame and hand it to the HWC layer
      The displayFrame has been converted by the display transform.

    • Compute the sourceCrop and hand it to the HWC layer
      The sourceCrop comes from the upper layers and is then combined with the Display and the other layer properties.

    • Set the alpha value

    • Set the z-order

    • Set the layer info
      type and appId are set when the Android framework creates the SurfaceControl; searching for "new SurfaceControl" will show where. type is the window type (for example a screenshot or background surface); appId identifies the owning application.

    • Set the transform
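
    To make the three blend-mode equations above concrete, here is a small standalone sketch (an illustration only, not SurfaceFlinger code) that applies each formula to one RGBA pixel; the pixel values and helper names are invented for the example.

    #include <cstdio>

    struct Color { float r, g, b, a; };

    // HWC2_BLEND_MODE_NONE: colorOut = colorSrc
    Color blendNone(Color src, Color /*dst*/) { return src; }

    // HWC2_BLEND_MODE_PREMULTIPLIED: colorOut = colorSrc + colorDst * (1 - alphaSrc)
    Color blendPremultiplied(Color src, Color dst) {
        float k = 1.0f - src.a;
        return {src.r + dst.r * k, src.g + dst.g * k, src.b + dst.b * k, src.a + dst.a * k};
    }

    // HWC2_BLEND_MODE_COVERAGE: colorOut = colorSrc * alphaSrc + colorDst * (1 - alphaSrc)
    Color blendCoverage(Color src, Color dst) {
        float k = 1.0f - src.a;
        return {src.r * src.a + dst.r * k, src.g * src.a + dst.g * k,
                src.b * src.a + dst.b * k, src.a + dst.a * k};
    }

    int main() {
        Color src{0.5f, 0.0f, 0.0f, 0.5f}; // translucent red, premultiplied form
        Color dst{0.0f, 0.0f, 1.0f, 1.0f}; // opaque blue already in the target
        Color out = blendPremultiplied(src, dst);
        std::printf("premultiplied: r=%.2f g=%.2f b=%.2f a=%.2f\n", out.r, out.g, out.b, out.a);
        return 0;
    }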

    These per-layer settings are carried out by writing to and reading from the command buffer. For example, setting the blend mode ultimately calls HwcHal's setLayerBlendMode method.

    Error HwcHal::setLayerBlendMode(Display display, Layer layer, int32_t mode)
    {
        int32_t err = mDispatch.setLayerBlendMode(mDevice, display, layer, mode);
        return static_cast<Error>(err);
    }
    

    Back in setUpHWComposer:

    3. Setting the per-frame data for each layer

    void SurfaceFlinger::setUpHWComposer() {
         ... ...
        mat4 colorMatrix = mColorMatrix * computeSaturationMatrix() * mDaltonizer();
    
        // Set the per-frame data
        for (size_t displayId = 0; displayId < mDisplays.size(); ++displayId) {
            auto& displayDevice = mDisplays[displayId];
            const auto hwcId = displayDevice->getHwcDisplayId();
    
            ... ... // Set each Display's color matrix
            if (colorMatrix != mPreviousColorMatrix) {
                status_t result = getBE().mHwc->setColorTransform(hwcId, colorMatrix);
                ... ...
            }
            for (auto& layer : displayDevice->getVisibleLayersSortedByZ()) {
                if (layer->getForceClientComposition(hwcId)) {
                    ALOGV("[%s] Requesting Client composition", layer->getName().string());
                    layer->setCompositionType(hwcId, HWC2::Composition::Client);
                    continue;
                }
    
                layer->setPerFrameData(displayDevice);
            }
    
            if (hasWideColorDisplay) {
                android_color_mode newColorMode;
                android_dataspace newDataSpace = HAL_DATASPACE_V0_SRGB;
    
                for (auto& layer : displayDevice->getVisibleLayersSortedByZ()) {
                    newDataSpace = bestTargetDataSpace(layer->getDataSpace(), newDataSpace);
                    ALOGV("layer: %s, dataspace: %s (%#x), newDataSpace: %s (%#x)",
                          layer->getName().string(), dataspaceDetails(layer->getDataSpace()).c_str(),
                          layer->getDataSpace(), dataspaceDetails(newDataSpace).c_str(), newDataSpace);
                }
                newColorMode = pickColorMode(newDataSpace);
    
                setActiveColorModeInternal(displayDevice, newColorMode);
            }
        }
    
        mPreviousColorMatrix = colorMatrix;
    

    Setting the per-frame data mainly involves the following:

    • Set each Display's color matrix
      This can be configured in Developer options ("Simulate color space"). The supported transforms are:
    typedef enum {
        HAL_COLOR_TRANSFORM_IDENTITY = 0,
        HAL_COLOR_TRANSFORM_ARBITRARY_MATRIX = 1,
        HAL_COLOR_TRANSFORM_VALUE_INVERSE = 2,
        HAL_COLOR_TRANSFORM_GRAYSCALE = 3,
        HAL_COLOR_TRANSFORM_CORRECT_PROTANOPIA = 4,
        HAL_COLOR_TRANSFORM_CORRECT_DEUTERANOPIA = 5,
        HAL_COLOR_TRANSFORM_CORRECT_TRITANOPIA = 6,
    } android_color_transform_t;
    

    The color transform is mainly used to transform colors, for example to simulate color blindness or to assist color-blind users.
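
    As an illustration of what such a transform does (not the actual SurfaceFlinger code), applying a color matrix is just a 4x4 matrix multiply per pixel. The grayscale matrix below uses the common Rec. 709 luminance weights and is only an assumed example.

    #include <array>
    #include <cstdio>

    using Vec4 = std::array<float, 4>;                  // r, g, b, a
    using Mat4 = std::array<std::array<float, 4>, 4>;   // row-major

    // out = m * color, the same shape of transform that setColorTransform installs
    Vec4 apply(const Mat4& m, const Vec4& c) {
        Vec4 out{};
        for (int row = 0; row < 4; ++row)
            for (int col = 0; col < 4; ++col)
                out[row] += m[row][col] * c[col];
        return out;
    }

    int main() {
        // Grayscale: every output channel becomes the luminance of the input.
        const float lr = 0.2126f, lg = 0.7152f, lb = 0.0722f;
        Mat4 grayscale = {{
            {lr, lg, lb, 0.0f},
            {lr, lg, lb, 0.0f},
            {lr, lg, lb, 0.0f},
            {0.0f, 0.0f, 0.0f, 1.0f},
        }};
        Vec4 pixel{0.8f, 0.2f, 0.1f, 1.0f};
        Vec4 out = apply(grayscale, pixel);
        std::printf("gray pixel: %.3f %.3f %.3f %.3f\n", out[0], out[1], out[2], out[3]);
        return 0;
    }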

    • Set each layer's per-frame display data
      BufferLayer and ColorLayer implement setPerFrameData differently. ColorLayer's logic is fairly simple:
    void ColorLayer::setPerFrameData(const sp<const DisplayDevice>& displayDevice) {
        ... ...
        // Set the visible region
        auto error = hwcLayer->setVisibleRegion(visible);
    
        // Specify the composition type
        setCompositionType(hwcId, HWC2::Composition::SolidColor);
    
        half4 color = getColor();
        // Set the color
        error = hwcLayer->setColor({static_cast<uint8_t>(std::round(255.0f * color.r)),
                                    static_cast<uint8_t>(std::round(255.0f * color.g)),
                                    static_cast<uint8_t>(std::round(255.0f * color.b)), 255});
        ... ... // Clear the transform
        error = hwcLayer->setTransform(HWC2::Transform::None);
        ... ...
    }
    

    A ColorLayer involves four main operations: set the visible region, which was already computed earlier but must be confined to the Display's viewport here; specify the composition type, which defaults to SolidColor; set the color, i.e. this layer's color in RGBA format with the alpha fixed at 255 (fully opaque); and finally clear the transform, since a ColorLayer does not need one.

    BufferLayer's setPerFrameData looks like this:

    void BufferLayer::setPerFrameData(const sp<const DisplayDevice>& displayDevice) {
        // Set the visible region
        auto& hwcLayer = hwcInfo.layer;
        auto error = hwcLayer->setVisibleRegion(visible);
    
        ... ... // Set the surface damage region
        error = hwcLayer->setSurfaceDamage(surfaceDamageRegion);
        ... ...
    
        // Sideband layers: the composition type defaults to Sideband
        if (getBE().compositionInfo.hwc.sidebandStream.get()) {
            setCompositionType(hwcId, HWC2::Composition::Sideband);
            // Specify the sideband stream
            error = hwcLayer->setSidebandStream(getBE().compositionInfo.hwc.sidebandStream->handle());
            ... ...
            return; // Sideband layers are done at this point, so return.
        }
    
        // Cursor layers request the Cursor composition type, everything else requests Device
        if (mPotentialCursor) {
            ALOGV("[%s] Requesting Cursor composition", mName.string());
            setCompositionType(hwcId, HWC2::Composition::Cursor);
        } else {
            ALOGV("[%s] Requesting Device composition", mName.string());
            setCompositionType(hwcId, HWC2::Composition::Device);
        }
    
        // Set the dataspace
        error = hwcLayer->setDataspace(mCurrentState.dataSpace);
        if (error != HWC2::Error::None) {
            ALOGE("[%s] Failed to set dataspace %d: %s (%d)", mName.string(), mCurrentState.dataSpace,
                  to_string(error).c_str(), static_cast<int32_t>(error));
        }
    
        // Get the GraphicBuffer
        uint32_t hwcSlot = 0;
        sp<GraphicBuffer> hwcBuffer;
        hwcInfo.bufferCache.getHwcBuffer(getBE().compositionInfo.mBufferSlot,
                                         getBE().compositionInfo.mBuffer, &hwcSlot, &hwcBuffer);
    
        // Get the acquire fence
        auto acquireFence = mConsumer->getCurrentFence();
        // Set the buffer
        error = hwcLayer->setBuffer(hwcSlot, hwcBuffer, acquireFence);
        if (error != HWC2::Error::None) {
            ALOGE("[%s] Failed to set buffer %p: %s (%d)", mName.string(),
                  getBE().compositionInfo.mBuffer->handle, to_string(error).c_str(),
                  static_cast<int32_t>(error));
        }
    }
    

    BufferLayer does more than ColorLayer. Sideband, cursor and ordinary UI layers are all BufferLayers, and each kind is handled a little differently. The tricky part here is the fence handling. Also note that only the buffer's handle is passed down to the HWC, not the buffer's contents.

    Error Composer::setLayerBuffer(Display display, Layer layer,
            uint32_t slot, const sp<GraphicBuffer>& buffer, int acquireFence)
    {
        mWriter.selectDisplay(display);
        mWriter.selectLayer(layer);
        if (mIsUsingVrComposer && buffer.get()) {
            ... ... // VR composer handling
        }
    
        const native_handle_t* handle = nullptr;
        if (buffer.get()) {
            handle = buffer->getNativeBuffer()->handle;
        }
    
        mWriter.setLayerBuffer(slot, handle, acquireFence);
        return Error::NONE;
    }
    

    So each layer's data is either a buffer or a fixed color. While handling the per-layer data, wide color also has to be dealt with: bestTargetDataSpace finds the best dataspace across the layers, pickColorMode then chooses a color mode for it, and setActiveColorModeInternal finally applies it (a rough sketch of this selection idea follows the code below).

    void SurfaceFlinger::setActiveColorModeInternal(const sp<DisplayDevice>& hw,
            android_color_mode_t mode) {
        int32_t type = hw->getDisplayType();
        android_color_mode_t currentMode = hw->getActiveColorMode();
    
        ... ...
    
        hw->setActiveColorMode(mode);
        getHwComposer().setActiveColorMode(type, mode);
    }
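
    As a rough sketch of that selection idea (heavily simplified, not the real bestTargetDataSpace / pickColorMode implementations), the logic amounts to widening the target dataspace whenever any visible layer needs a wider gamut, then picking a color mode that can represent it:

    #include <cstdio>
    #include <vector>

    enum class Dataspace { SRGB, DisplayP3 };
    enum class ColorMode { SRGB, DisplayP3 };

    // Roughly what bestTargetDataSpace does: widen the target if a layer needs it.
    Dataspace bestTarget(Dataspace layer, Dataspace current) {
        return (layer == Dataspace::DisplayP3) ? Dataspace::DisplayP3 : current;
    }

    // Roughly what pickColorMode does: choose a mode that can show the dataspace.
    ColorMode pickMode(Dataspace ds) {
        return (ds == Dataspace::DisplayP3) ? ColorMode::DisplayP3 : ColorMode::SRGB;
    }

    int main() {
        std::vector<Dataspace> visibleLayers{Dataspace::SRGB, Dataspace::DisplayP3};
        Dataspace target = Dataspace::SRGB;
        for (Dataspace ds : visibleLayers) target = bestTarget(ds, target);
        ColorMode mode = pickMode(target);   // would then go to setActiveColorModeInternal
        std::printf("color mode: %s\n", mode == ColorMode::DisplayP3 ? "Display-P3" : "sRGB");
        return 0;
    }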
    

    Back in setUpHWComposer:

    4. prepareFrame

        for (size_t displayId = 0; displayId < mDisplays.size(); ++displayId) {
            auto& displayDevice = mDisplays[displayId];
            if (!displayDevice->isDisplayOn()) {
                continue;
            }
    
            status_t result = displayDevice->prepareFrame(*getBE().mHwc);
            ALOGE_IF(result != NO_ERROR, "prepareFrame for display %zd failed:"
                    " %d (%s)", displayId, result, strerror(-result));
        }
    }
    

    The prepare flow is now written somewhat indirectly: prepare used to be called straight from SurfaceFlinger, but now it goes through DisplayDevice, calling each DisplayDevice's prepareFrame.

    prepareFrame is as follows:

    status_t DisplayDevice::prepareFrame(HWComposer& hwc) {
        status_t error = hwc.prepare(*this);
        if (error != NO_ERROR) {
            return error;
        }
    
        DisplaySurface::CompositionType compositionType;
        bool hasClient = hwc.hasClientComposition(mHwcDisplayId);
        bool hasDevice = hwc.hasDeviceComposition(mHwcDisplayId);
        if (hasClient && hasDevice) {
            compositionType = DisplaySurface::COMPOSITION_MIXED;
        } else if (hasClient) {
            compositionType = DisplaySurface::COMPOSITION_GLES;
        } else if (hasDevice) {
            compositionType = DisplaySurface::COMPOSITION_HWC;
        } else {
            // Nothing to do -- when turning the screen off we get a frame like
            // this. Call it a HWC frame since we won't be doing any GLES work but
            // will do a prepare/set cycle.
            compositionType = DisplaySurface::COMPOSITION_HWC;
        }
        return mDisplaySurface->prepareFrame(compositionType);
    }
    

    When the layer data was set, a composition type was already requested for each layer, but that is only SurfaceFlinger's wish; whether it holds depends on whether the HWC accepts it. HWComposer's prepare function is as follows:

    status_t HWComposer::prepare(DisplayDevice& displayDevice) {
        ATRACE_CALL();
    
        Mutex::Autolock _l(mDisplayLock);
        auto displayId = displayDevice.getHwcDisplayId();
        ... ...
    
        auto& displayData = mDisplayData[displayId];
        auto& hwcDisplay = displayData.hwcDisplay;
        if (!hwcDisplay->isConnected()) {
            return NO_ERROR;
        }
    
        ... ...
        if (!displayData.hasClientComposition) {
            sp<android::Fence> outPresentFence;
            uint32_t state = UINT32_MAX;
            error = hwcDisplay->presentOrValidate(&numTypes, &numRequests, &outPresentFence , &state);
            if (error != HWC2::Error::None && error != HWC2::Error::HasChanges) {
                ALOGV("skipValidate: Failed to Present or Validate");
                return UNKNOWN_ERROR;
            }
            if (state == 1) { //Present Succeeded.
                std::unordered_map<HWC2::Layer*, sp<Fence>> releaseFences;
                error = hwcDisplay->getReleaseFences(&releaseFences);
                displayData.releaseFences = std::move(releaseFences);
                displayData.lastPresentFence = outPresentFence;
                displayData.validateWasSkipped = true;
                displayData.presentError = error;
                return NO_ERROR;
            }
            // Present failed but Validate ran.
        } else {
            error = hwcDisplay->validate(&numTypes, &numRequests);
        }
        ... ...
    
        std::unordered_map<HWC2::Layer*, HWC2::Composition> changedTypes;
        changedTypes.reserve(numTypes);
        error = hwcDisplay->getChangedCompositionTypes(&changedTypes);
        ... ...
    
        displayData.displayRequests = static_cast<HWC2::DisplayRequest>(0);
        std::unordered_map<HWC2::Layer*, HWC2::LayerRequest> layerRequests;
        layerRequests.reserve(numRequests);
        error = hwcDisplay->getRequests(&displayData.displayRequests,
                &layerRequests);
        if (error != HWC2::Error::None) {
            ALOGE("prepare: getRequests failed on display %d: %s (%d)", displayId,
                    to_string(error).c_str(), static_cast<int32_t>(error));
            return BAD_INDEX;
        }
    
        displayData.hasClientComposition = false;
        displayData.hasDeviceComposition = false;
        for (auto& layer : displayDevice.getVisibleLayersSortedByZ()) {
            auto hwcLayer = layer->getHwcLayer(displayId);
    
            if (changedTypes.count(hwcLayer) != 0) {
                // We pass false so we only update our state and don't call back
                // into the HWC device
                validateChange(layer->getCompositionType(displayId),
                        changedTypes[hwcLayer]);
                layer->setCompositionType(displayId, changedTypes[hwcLayer], false);
            }
    
            switch (layer->getCompositionType(displayId)) {
                ... ...
            }
    
            if (layerRequests.count(hwcLayer) != 0 &&
                    layerRequests[hwcLayer] ==
                            HWC2::LayerRequest::ClearClientTarget) {
                layer->setClearClientTarget(displayId, true);
            } else {
                if (layerRequests.count(hwcLayer) != 0) {
                    ALOGE("prepare: Unknown layer request: %s",
                            to_string(layerRequests[hwcLayer]).c_str());
                }
                layer->setClearClientTarget(displayId, false);
            }
        }
    
        error = hwcDisplay->acceptChanges();
        ... ...
    
        return NO_ERROR;
    }
    

    The prepare flow is:

    • First, try to finish prepare and present in one pass
      If SurfaceFlinger has not requested any Client composition (hasClientComposition is false), it first tries to present directly through presentOrValidate. If the HWC cannot present directly, a validate is performed instead, and the flow is then the same as a normal validate. If the present succeeds, this frame is already on screen and no further processing is needed.

    • Validate

    Error Display::validate(uint32_t* outNumTypes, uint32_t* outNumRequests)
    {
        uint32_t numTypes = 0;
        uint32_t numRequests = 0;
        auto intError = mComposer.validateDisplay(mId, &numTypes, &numRequests);
        auto error = static_cast<Error>(intError);
        if (error != Error::None && error != Error::HasChanges) {
            return error;
        }
    
        *outNumTypes = numTypes;
        *outNumRequests = numRequests;
        return error;
    }
    

    Note Composer's validateDisplay function, and how it differs from the other Composer commands:

    Error Composer::validateDisplay(Display display, uint32_t* outNumTypes,
            uint32_t* outNumRequests)
    {
        mWriter.selectDisplay(display);
        mWriter.validateDisplay();
    
        Error error = execute();
        if (error != Error::NONE) {
            return error;
        }
    
        mReader.hasChanges(display, outNumTypes, outNumRequests);
    
        return Error::NONE;
    }
    

    validateDisplay also reaches the HWC through the CommandWriter by writing into the command buffer, but with an extra execute call. In fact, none of the buffered commands issued before validateDisplay have actually reached the HWC yet; they were only recorded into the buffer. It is execute that really issues them: it makes the HWC service side parse the buffered commands and dispatch each one to its implementation function in the HWC.
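
    The pattern is essentially "record now, dispatch later". Below is a deliberately tiny standalone sketch of the idea (the opcodes and types are invented for illustration; the real IComposer command stream is considerably more involved): the write calls only append words to a buffer, and execute() walks the buffer and dispatches each command to its handler.

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    enum Opcode : uint32_t { SELECT_DISPLAY, SET_LAYER_Z_ORDER, VALIDATE_DISPLAY };

    class CommandStream {
    public:
        // "Writer" side: these only record words; nothing reaches the HAL yet.
        void selectDisplay(uint32_t display) { mWords.insert(mWords.end(), {SELECT_DISPLAY, display}); }
        void setLayerZOrder(uint32_t layer, uint32_t z) { mWords.insert(mWords.end(), {SET_LAYER_Z_ORDER, layer, z}); }
        void validateDisplay() { mWords.push_back(VALIDATE_DISPLAY); }

        // execute(): the "server" side walks the buffer and calls the real implementation.
        void execute() {
            for (size_t i = 0; i < mWords.size();) {
                switch (mWords[i]) {
                    case SELECT_DISPLAY:
                        std::printf("hal: select display %u\n", mWords[i + 1]); i += 2; break;
                    case SET_LAYER_Z_ORDER:
                        std::printf("hal: layer %u z=%u\n", mWords[i + 1], mWords[i + 2]); i += 3; break;
                    case VALIDATE_DISPLAY:
                        std::printf("hal: validate\n"); i += 1; break;
                }
            }
            mWords.clear();
        }

    private:
        std::vector<uint32_t> mWords;
    };

    int main() {
        CommandStream stream;
        stream.selectDisplay(0);
        stream.setLayerZOrder(7, 2);   // only recorded
        stream.validateDisplay();      // still only recorded
        stream.execute();              // everything is dispatched here
        return 0;
    }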

    For example, the parsing of the set z-order command looks like this:

    bool ComposerClient::CommandReader::parseSetLayerZOrder(uint16_t length)
    {
        if (length != CommandWriterBase::kSetLayerZOrderLength) {
            return false;
        }
    
        auto err = mHal.setLayerZOrder(mDisplay, mLayer, read());
        if (err != Error::NONE) {
            mWriter.setError(getCommandLoc(), err);
        }
    
        return true;
    }
    
    • Fetch the HWC's validate results
      If the HWC cannot handle a composition type requested by SurfaceFlinger, getChangedCompositionTypes returns the HWC's changes to the composition types, which are stored in changedTypes. The layer requests are fetched into layerRequests. Both layerRequests and changedTypes are maps keyed by HWC2::Layer.

    • Update the composition types
      If the HWC did not accept a requested composition type, SurfaceFlinger updates its own state according to the HWC's feedback, i.e. for the layers listed in changedTypes. The update goes through setCompositionType; note that the callIntoHwc parameter is false here.

    • Handle the layerRequests
      The layer requests mainly decide whether the Client target (the Client composition result) needs to be cleared. Keep an eye on clearClientTarget and how the later flow handles it.

    void Layer::setClearClientTarget(int32_t hwcId, bool clear) {
        if (getBE().mHwcLayers.count(hwcId) == 0) {
            ALOGE("setClearClientTarget called without a valid HWC layer");
            return;
        }
        getBE().mHwcLayers[hwcId].clearClientTarget = clear;
    }
    
    • Finally, accept the changes
      SurfaceFlinger tells the HWC that it accepts the changes.
    Error Display::acceptChanges()
    {
        auto intError = mComposer.acceptDisplayChanges(mId);
        return static_cast<Error>(intError);
    }
    

    That is the end of setUpHWComposer. At this point the data to be displayed has been sent to the HWC and every layer's composition type has been decided. If the HWC was able to update and present in one step, the data is already being displayed.

    Back in handleMessageRefresh, the next step is doDebugFlashRegions. It is purely a debug feature that makes the updated regions flash continuously, controlled by mDebugRegion.

    void SurfaceFlinger::doDebugFlashRegions()
    {
        // is debugging enabled
        if (CC_LIKELY(!mDebugRegion))
            return;
    

    }

    The doTracing that follows is also a debug helper.

    void SurfaceFlinger::doTracing(const char* where) {
        ATRACE_CALL();
        ATRACE_NAME(where);
        if (CC_UNLIKELY(mTracing.isEnabled())) {
            mTracing.traceLayers(where, dumpProtoInfo(LayerVector::StateSet::Drawing));
        }
    }
    

    Composition handling: doComposition

    If present and validate were not completed together, then at this point the data to be displayed has been sent to the HWC and every layer's composition type has been decided; the remaining composition work happens in doComposition.

    The doComposition function:

    void SurfaceFlinger::doComposition() {
        ATRACE_CALL();
        ALOGV("doComposition");
    
        const bool repaintEverything = android_atomic_and(0, &mRepaintEverything);
        for (size_t dpy=0 ; dpy<mDisplays.size() ; dpy++) {
            const sp<DisplayDevice>& hw(mDisplays[dpy]);
            if (hw->isDisplayOn()) {
                // transform the dirty region into this screen's coordinate space
                const Region dirtyRegion(hw->getDirtyRegion(repaintEverything));
    
                // repaint the framebuffer (if needed)
                doDisplayComposition(hw, dirtyRegion);
    
                hw->dirtyRegion.clear();
                hw->flip();
            }
        }
        postFramebuffer();
    }
    

    Composition is again done per Display. The main steps are:

    1. Get the dirty region
      The Display's dirtyRegion was already computed earlier when the layer stacks were rebuilt. If everything is being repainted (mRepaintEverything is true), the dirty region is the whole screen.
    Region DisplayDevice::getDirtyRegion(bool repaintEverything) const {
        Region dirty;
        if (repaintEverything) {
            dirty.set(getBounds());
        } else {
            const Transform& planeTransform(mGlobalTransform);
            dirty = planeTransform.transform(this->dirtyRegion);
            dirty.andSelf(getBounds());
        }
        return dirty;
    }
    

    2. Per-Display composition
    doDisplayComposition is as follows:

    void SurfaceFlinger::doDisplayComposition(
            const sp<const DisplayDevice>& displayDevice,
            const Region& inDirtyRegion)
    {
        // Only proceed for an HWC display or when the dirty region is not empty
        bool isHwcDisplay = displayDevice->getHwcDisplayId() >= 0;
        if (!isHwcDisplay && inDirtyRegion.isEmpty()) {
            ALOGV("Skipping display composition");
            return;
        }
    
        ALOGV("doDisplayComposition");
        if (!doComposeSurfaces(displayDevice)) return;
    
        // swap buffers (presentation)
        displayDevice->swapBuffers(getHwComposer());
    }
    

    The composition work is mainly done in doComposeSurfaces. There are essentially two composition paths: Client composition, done with the GPU, and Device composition, done by the HWC hardware. doComposeSurfaces mostly deals with Client composition, where the client composites with the GPU through RenderEngine.

    doComposeSurfaces is shown below in sections:

    • RenderEngine setup
    bool SurfaceFlinger::doComposeSurfaces(const sp<const DisplayDevice>& displayDevice)
    {
        ... ...
    
        const Region bounds(displayDevice->bounds());
        const DisplayRenderArea renderArea(displayDevice);
        const auto hwcId = displayDevice->getHwcDisplayId();
    
        mat4 oldColorMatrix;
        const bool applyColorMatrix = !getBE().mHwc->hasDeviceComposition(hwcId) &&
                !getBE().mHwc->hasCapability(HWC2::Capability::SkipClientColorTransform);
        if (applyColorMatrix) {
            mat4 colorMatrix = mColorMatrix * mDaltonizer();
            oldColorMatrix = getRenderEngine().setupColorTransform(colorMatrix);
        }
    
        bool hasClientComposition = getBE().mHwc->hasClientComposition(hwcId);
        if (hasClientComposition) {
            ALOGV("hasClientComposition");
    
            getBE().mRenderEngine->setWideColor(
                    displayDevice->getWideColorSupport() && !mForceNativeColorMode);
            getBE().mRenderEngine->setColorMode(mForceNativeColorMode ?
                    HAL_COLOR_MODE_NATIVE : displayDevice->getActiveColorMode());
            if (!displayDevice->makeCurrent()) {
                ... ...
            }
    
            const bool hasDeviceComposition = getBE().mHwc->hasDeviceComposition(hwcId);
            if (hasDeviceComposition) {
                getBE().mRenderEngine->clearWithColor(0, 0, 0, 0);
            } else {
                const Region letterbox(bounds.subtract(displayDevice->getScissor()));
    
                // compute the area to clear
                Region region(displayDevice->undefinedRegion.merge(letterbox));
    
                // screen is already cleared here
                if (!region.isEmpty()) {
                    // can happen with SurfaceView
                    drawWormhole(displayDevice, region);
                }
            }
    
            if (displayDevice->getDisplayType() != DisplayDevice::DISPLAY_PRIMARY) {
    
                const Rect& bounds(displayDevice->getBounds());
                const Rect& scissor(displayDevice->getScissor());
                if (scissor != bounds) {
                    const uint32_t height = displayDevice->getHeight();
                    getBE().mRenderEngine->setScissor(scissor.left, height - scissor.bottom,
                            scissor.getWidth(), scissor.getHeight());
                }
            }
        }
    

    The RenderEngine setup includes:

    • Specify the color matrix: setupColorTransform
    • Specify whether wide color is used: setWideColor
    • Specify the color mode: setColorMode
    • Set the FBTarget surface, viewport and projection matrix
      This is done in DisplayDevice's makeCurrent:
    bool DisplayDevice::makeCurrent() const {
        bool success = mFlinger->getRenderEngine().setCurrentSurface(mSurface);
        setViewportAndProjection();
        return success;
    }
    
    • Clear the FBTarget background
      In mixed mode, i.e. both hasClientComposition and hasDeviceComposition, the FBTarget background is cleared first. In practice composition rarely takes this path; usually drawWormhole fills the screen with RGBA 0,0,0,0.

    • Set the scissor: setScissor
      For non-primary displays, the Display's scissor region is set via setScissor.

    With that, the setup is done.

    • Render the Client-composited layers into the FBTarget
    bool SurfaceFlinger::doComposeSurfaces(const sp<const DisplayDevice>& displayDevice)
    {
        ... ...
    
        const Transform& displayTransform = displayDevice->getTransform();
        if (hwcId >= 0) {
            // hwcId >= 0 means we are using the HWC
            bool firstLayer = true;
            for (auto& layer : displayDevice->getVisibleLayersSortedByZ()) {
                const Region clip(bounds.intersect(
                        displayTransform.transform(layer->visibleRegion)));
    
                if (!clip.isEmpty()) {
                    switch (layer->getCompositionType(hwcId)) {
                        case HWC2::Composition::Cursor:
                        case HWC2::Composition::Device:
                        case HWC2::Composition::Sideband:
                        case HWC2::Composition::SolidColor: {
                            const Layer::State& state(layer->getDrawingState());
                            if (layer->getClearClientTarget(hwcId) && !firstLayer &&
                                    layer->isOpaque(state) && (state.color.a == 1.0f)
                                    && hasClientComposition) {
                                // never clear the very first layer since we're
                                // guaranteed the FB is already cleared
                                layer->clearWithOpenGL(renderArea);
                            }
                            break;
                        }
                        case HWC2::Composition::Client: {
                            layer->draw(renderArea, clip);
                            break;
                        }
                        default:
                            break;
                    }
                } else {
                    ALOGV("  Skipping for empty clip");
                }
                firstLayer = false;
            }
        } else {
            // we're not using h/w composer
            for (auto& layer : displayDevice->getVisibleLayersSortedByZ()) {
                const Region clip(bounds.intersect(
                        displayTransform.transform(layer->visibleRegion)));
                if (!clip.isEmpty()) {
                    layer->draw(renderArea, clip);
                }
            }
        }
    
        if (applyColorMatrix) {
            getRenderEngine().setupColorTransform(oldColorMatrix);
        }
    
        // disable scissor at the end of the frame
        getBE().mRenderEngine->disableScissor();
        return true;
    }
    

    hwcId >= 0 means the HWC is in use, which is the common case; for many VirtualDisplay cases hwcId is -1.

    When the HWC is in use, the clip region of each layer (its visible region mapped into the Display) is computed first. If the layer uses Client composition, layer->draw is called directly to draw the layer's clip region into the FBTarget. If the layer does not use Client composition but some other layer does, the layer's corresponding region in the FBTarget must be cleared with clearWithOpenGL; the cleared region will be filled by HWC composition. In the end, the FBTarget contents and the HWC-composited layers are combined into the final frame.

    When the HWC is not in use, layer->draw is simply called for every layer to draw its clip region into the FBTarget.

    That completes doComposeSurfaces, which mainly covers the Client-composition related flow.

    3. Swapping the Display's buffers

    void DisplayDevice::swapBuffers(HWComposer& hwc) const {
        if (hwc.hasClientComposition(mHwcDisplayId)) {
            mSurface.swapBuffers();
        }
    
        status_t result = mDisplaySurface->advanceFrame();
        if (result != NO_ERROR) {
            ALOGE("[%s] failed pushing new frame to HWC: %d",
                    mDisplayName.string(), result);
        }
    }
    

    If there was any Client composition, eglSwapBuffers is called to swap buffers:

    void Surface::swapBuffers() const {
        if (!eglSwapBuffers(mEGLDisplay, mEGLSurface)) {
            ... ...
        }
    }
    

    mDisplaySurface's advanceFrame: virtual displays use VirtualDisplaySurface, non-virtual displays use FramebufferSurface. advanceFrame fetches the FBTarget data; let's look at the non-virtual case:

    status_t FramebufferSurface::advanceFrame() {
        uint32_t slot = 0;
        sp<GraphicBuffer> buf;
        sp<Fence> acquireFence(Fence::NO_FENCE);
        android_dataspace_t dataspace = HAL_DATASPACE_UNKNOWN;
        status_t result = nextBuffer(slot, buf, acquireFence, dataspace);
        mDataSpace = dataspace;
        if (result != NO_ERROR) {
            ALOGE("error latching next FramebufferSurface buffer: %s (%d)",
                    strerror(-result), result);
        }
        return result;
    }
    

    The work is mainly done in nextBuffer:

    status_t FramebufferSurface::nextBuffer(uint32_t& outSlot,
            sp<GraphicBuffer>& outBuffer, sp<Fence>& outFence,
            android_dataspace_t& outDataspace) {
        Mutex::Autolock lock(mMutex);
    
        BufferItem item;
        status_t err = acquireBufferLocked(&item, 0);
        ... ...
        if (mCurrentBufferSlot != BufferQueue::INVALID_BUFFER_SLOT &&
            item.mSlot != mCurrentBufferSlot) {
            mHasPendingRelease = true;
            mPreviousBufferSlot = mCurrentBufferSlot;
            mPreviousBuffer = mCurrentBuffer;
        }
        mCurrentBufferSlot = item.mSlot;
        mCurrentBuffer = mSlots[mCurrentBufferSlot].mGraphicBuffer;
        mCurrentFence = item.mFence;
    
        outFence = item.mFence;
        mHwcBufferCache.getHwcBuffer(mCurrentBufferSlot, mCurrentBuffer,
                &outSlot, &outBuffer);
        outDataspace = item.mDataSpace;
        status_t result =
                mHwc.setClientTarget(mDisplayType, outSlot, outFence, outBuffer, outDataspace);
        ... ...
    }
    

    Inside nextBuffer:

    • Acquire a buffer
      For Client composition, swapBuffers ends up calling queueBuffer, which queues the buffer into the FramebufferSurface's BufferQueue. acquireBufferLocked here acquires a buffer from that BufferQueue.

    • Swap in the new buffer
      The current buffer slot is mCurrentBufferSlot, the current buffer is mCurrentBuffer, and its fence is mCurrentFence. If the newly acquired buffer is different, the previous one will be released. Buffers are cached in mHwcBufferCache.

    • Hand the FBTarget to the HWC
      The key call is mHwc.setClientTarget:

    status_t HWComposer::setClientTarget(int32_t displayId, uint32_t slot,
            const sp<Fence>& acquireFence, const sp<GraphicBuffer>& target,
            android_dataspace_t dataspace) {
        ... ...
    
        auto& hwcDisplay = mDisplayData[displayId].hwcDisplay;
        auto error = hwcDisplay->setClientTarget(slot, target, acquireFence, dataspace);
        if (error != HWC2::Error::None) {
            ALOGE("Failed to set client target for display %d: %s (%d)", displayId,
                    to_string(error).c_str(), static_cast<int32_t>(error));
            return BAD_VALUE;
        }
    
        return NO_ERROR;
    }
    

    The FBTarget is also passed to the HWC through the command buffer. In HWC 1.x a layer was created for the FBTarget when the work list was built; HWC2 passes the FBTarget directly.

    Back in doComposition, DisplayDevice's flip function simply counts the flips in mPageFlipCount.

    void DisplayDevice::flip() const
    {
        mFlinger->getRenderEngine().checkErrors();
        mPageFlipCount++;
    }
    

    4. Posting the framebuffer
    At this point, what state is the display data in? Whatever needed Client composition has been composited, and the resulting FBTarget has been handed to the HWC. Whatever needs Device composition was also submitted to the HWC earlier. But nothing has been composited onto the screen yet: postFramebuffer is what tells the HWC to do the final composition.

    postFramebuffer is as follows:

    void SurfaceFlinger::postFramebuffer()
    {
        ... ...
    
        for (size_t displayId = 0; displayId < mDisplays.size(); ++displayId) {
            auto& displayDevice = mDisplays[displayId];
            if (!displayDevice->isDisplayOn()) {
                continue;
            }
            const auto hwcId = displayDevice->getHwcDisplayId();
            if (hwcId >= 0) {
                getBE().mHwc->presentAndGetReleaseFences(hwcId);
            }
            displayDevice->onSwapBuffersCompleted();
            displayDevice->makeCurrent();
            for (auto& layer : displayDevice->getVisibleLayersSortedByZ()) {
                auto hwcLayer = layer->getHwcLayer(hwcId);
                sp<Fence> releaseFence = getBE().mHwc->getLayerReleaseFence(hwcId, hwcLayer);
    
                if (layer->getCompositionType(hwcId) == HWC2::Composition::Client) {
                    releaseFence = Fence::merge("LayerRelease", releaseFence,
                            displayDevice->getClientTargetAcquireFence());
                }
    
                layer->onLayerDisplayed(releaseFence);
            }
    
            if (!displayDevice->getLayersNeedingFences().isEmpty()) {
                sp<Fence> presentFence = getBE().mHwc->getPresentFence(hwcId);
                for (auto& layer : displayDevice->getLayersNeedingFences()) {
                    layer->onLayerDisplayed(presentFence);
                }
            }
    
            if (hwcId >= 0) {
                getBE().mHwc->clearReleaseFences(hwcId);
            }
        }
    
        mLastSwapBufferTime = systemTime() - now;
        mDebugInSwapBuffers = 0;
    
        // |mStateLock| not needed as we are on the main thread
        uint32_t flipCount = getDefaultDisplayDeviceLocked()->getPageFlipCount();
        if (flipCount % LOG_FRAME_STATS_PERIOD == 0) {
            logFrameStats();
        }
    }
    

    The postFramebuffer flow is:

    • Present and fetch the release fences via presentAndGetReleaseFences
    status_t HWComposer::presentAndGetReleaseFences(int32_t displayId) {
        ... ...
    
        auto& displayData = mDisplayData[displayId];
        auto& hwcDisplay = displayData.hwcDisplay;
    
        if (displayData.validateWasSkipped) {
            // explicitly flush all pending commands
            auto error = mHwcDevice->flushCommands();
            ... ...
        }
    
        auto error = hwcDisplay->present(&displayData.lastPresentFence);
        if (error != HWC2::Error::None) {
            ... ...
        }
    
        std::unordered_map<HWC2::Layer*, sp<Fence>> releaseFences;
        error = hwcDisplay->getReleaseFences(&releaseFences);
        if (error != HWC2::Error::None) {
            ... ...
        }
    
        displayData.releaseFences = std::move(releaseFences);
    
        return NO_ERROR;
    }
    

    If presentOrValidate already succeeded during the earlier prepare step (validateWasSkipped is true), the pending commands in the command buffer are simply flushed so that they execute, and nothing further is done. A successful presentOrValidate implies there was no Client composition, and therefore no FBTarget to deal with.

    If presentOrValidate did not succeed earlier, it is very likely because Client composition was needed, i.e. present has not happened yet, so the present flow runs here.

    The present operation is also written into the command buffer first and then issued by execute.

    Error Composer::presentDisplay(Display display, int* outPresentFence)
    {
        mWriter.selectDisplay(display);
        mWriter.presentDisplay();
    
        Error error = execute();
        if (error != Error::NONE) {
            return error;
        }
    
        mReader.takePresentFence(display, outPresentFence);
    
        return Error::NONE;
    }
    

    Setting the FBTarget likewise only writes into the buffer; it is only at execute that the FBTarget setting takes effect on the server side, and the server then performs the final composition.

    After present completes, the release fences are fetched via getReleaseFences and stored in displayData. Note that these are per-layer release fences, a flow that did not exist before Android 8.0; previously there was only a single release fence shared by all layers. The lastPresentFence obtained at present time is effectively the FBTarget's release fence.

    Back in postFramebuffer.

    • DisplayDevice handles the FBTarget
      Release the previous frame's buffer:
    void DisplayDevice::onSwapBuffersCompleted() const {
        mDisplaySurface->onFrameCommitted();
    }
    

    The work is done in onFrameCommitted:

    void FramebufferSurface::onFrameCommitted() {
        if (mHasPendingRelease) {
            sp<Fence> fence = mHwc.getPresentFence(mDisplayType);
            if (fence->isValid()) {
                status_t result = addReleaseFence(mPreviousBufferSlot,
                        mPreviousBuffer, fence);
                ... ...
            }
            status_t result = releaseBufferLocked(mPreviousBufferSlot, mPreviousBuffer);
            ... ...
    
            mPreviousBuffer.clear();
            mHasPendingRelease = false;
        }
    }
    

    makeCurrent prepares for the next composition:

    bool DisplayDevice::makeCurrent() const {
        bool success = mFlinger->getRenderEngine().setCurrentSurface(mSurface);
        setViewportAndProjection();
        return success;
    }
    
    • Set the release fence on each layer
    void BufferLayer::onLayerDisplayed(const sp<Fence>& releaseFence) {
        mConsumer->setReleaseFence(releaseFence);
    }
    

    For layers that still need a fence, the present fence, i.e. the FBTarget's fence, is used. Finally, the release fences in HWComposer's mDisplayData are cleared, since they have already been handed off to the layers.

    That completes the composition processing.

    Post-composition handling: postComposition

    void SurfaceFlinger::postComposition(nsecs_t refreshStartTime)
    {
        // Release any buffers which were replaced this frame
        nsecs_t dequeueReadyTime = systemTime();
        for (auto& layer : mLayersWithQueuedFrames) {
            layer->releasePendingBuffer(dequeueReadyTime);
        }
    
        // Handle the timeline
        ... ...
    
        // Record buffering stats
        mDrawingState.traverseInZOrder([&](Layer* layer) {
            bool frameLatched = layer->onPostComposition(glCompositionDoneFenceTime,
                    presentFenceTime, compositorTiming);
            if (frameLatched) {
                recordBufferingStats(layer->getName().string(),
                        layer->getOccupancyHistory(false));
            }
        });
    
        // Vsync synchronization
        if (presentFenceTime->isValid()) {
            if (mPrimaryDispSync.addPresentFence(presentFenceTime)) {
                enableHardwareVsync();
            } else {
                disableHardwareVsync(false);
            }
        }
    
        if (!hasSyncFramework) {
            if (hw->isDisplayOn()) {
                enableHardwareVsync();
            }
        }
    
        // Animation frame tracking
        if (mAnimCompositionPending) {
            mAnimCompositionPending = false;
    
            if (presentFenceTime->isValid()) {
                mAnimFrameTracker.setActualPresentFence(
                        std::move(presentFenceTime));
            } else {
                // The HWC doesn't support present fences, so use the refresh
                // timestamp instead.
                nsecs_t presentTime =
                        getBE().mHwc->getRefreshTimestamp(HWC_DISPLAY_PRIMARY);
                mAnimFrameTracker.setActualPresentTime(presentTime);
            }
            mAnimFrameTracker.advanceFrame();
        }
    
        // Timing bookkeeping
    }
    

    postComposition mainly does the following:

    • Release the pending buffers
      Once this frame's composition is done, the buffers that were replaced are released.
    void BufferLayer::releasePendingBuffer(nsecs_t dequeueReadyTime) {
        if (!mConsumer->releasePendingBuffer()) {
            return;
        }
    
        auto releaseFenceTime =
                std::make_shared<FenceTime>(mConsumer->getPrevFinalReleaseFence());
        mReleaseTimeline.updateSignalTimes();
        mReleaseTimeline.push(releaseFenceTime);
    
        Mutex::Autolock lock(mFrameEventHistoryMutex);
        if (mPreviousFrameNumber != 0) {
            mFrameEventHistory.addRelease(mPreviousFrameNumber, dequeueReadyTime,
                                          std::move(releaseFenceTime));
        }
    }
    
    • Handle the timeline

    • Record buffering statistics

    void SurfaceFlinger::recordBufferingStats(const char* layerName,
            std::vector<OccupancyTracker::Segment>&& history) {
        Mutex::Autolock lock(getBE().mBufferingStatsMutex);
        auto& stats = getBE().mBufferingStats[layerName];
        for (const auto& segment : history) {
            if (!segment.usedThirdBuffer) {
                stats.twoBufferTime += segment.totalTime;
            }
            if (segment.occupancyAverage < 1.0f) {
                stats.doubleBufferedTime += segment.totalTime;
            } else if (segment.occupancyAverage < 2.0f) {
                stats.tripleBufferedTime += segment.totalTime;
            }
            ++stats.numSegments;
            stats.totalTime += segment.totalTime;
        }
    }
    
    • Vsync synchronization
      The vsync events we normally consume are distributed by mPrimaryDispSync rather than reported by the hardware every time, so mPrimaryDispSync must be kept in sync with the hardware vsync (a toy model of this idea is sketched after this list).

    • Animation frame tracking

    • Timing bookkeeping
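
    A toy model of that synchronization idea, purely for illustration (this is not the real DispSync code): keep a software vsync period and phase, feed it hardware timestamps such as present-fence signal times, and report when the model has drifted enough that hardware vsync should be re-enabled.

    #include <cstdint>
    #include <cstdio>
    #include <cstdlib>

    // Toy software vsync model; the real DispSync does a proper error/resync computation.
    class ToyDispSync {
    public:
        explicit ToyDispSync(int64_t periodNs) : mPeriodNs(periodNs) {}

        // Feed a hardware timestamp (e.g. a present fence signal time).
        // Returns true when the model has drifted and hardware vsync is needed to resync.
        bool addSample(int64_t timestampNs) {
            int64_t observedPhase = timestampNs % mPeriodNs;
            int64_t error = observedPhase - mPhaseNs;
            mPhaseNs += error / 8;                       // smooth the phase toward the sample
            return std::llabs(error) > mPeriodNs / 10;   // more than ~10% off: ask for HW vsync
        }

        // Where the model thinks the next vsync after nowNs will be.
        int64_t nextVsyncAfter(int64_t nowNs) const {
            int64_t n = (nowNs - mPhaseNs) / mPeriodNs + 1;
            return n * mPeriodNs + mPhaseNs;
        }

    private:
        int64_t mPeriodNs;
        int64_t mPhaseNs = 0;
    };

    int main() {
        ToyDispSync sync(16666667);   // ~60 Hz
        bool needHwVsync = sync.addSample(33400000);
        std::printf("need hw vsync: %d, next vsync at %lld ns\n", needHwVsync,
                    static_cast<long long>(sync.nextVsyncAfter(40000000)));
        return 0;
    }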

    With that, one round of composition is complete and the REFRESH message has been fully handled. When the next vsync arrives, a new round of composition begins.

    Client composition

    Hardware HWC composition is implemented by the vendor and differs from vendor to vendor, while Client composition ships with Android itself. Let's now look at Android's Client-side composition.

    Client composition is essentially done with the GPU. SurfaceFlinger wraps the details in RenderEngine; the relevant code lives in:

    frameworks/native/services/surfaceflinger/RenderEngine
    

    Let's look at the related classes:


    RenderEngine
    • RenderEngine wraps GPU rendering and holds the EGLDisplay, EGLContext, EGLConfig and EGLSurface. Note that the displays do not share one EGLSurface; each Display has its own.

    • GLES20RenderEngine extends RenderEngine and is the GLES 2.0 based implementation. Its programs are cached by ProgramCache and its state is described by Description.

    • Every BufferLayer has a dedicated Texture describing its texture, and GLES20RenderEngine supports texture mapping. During composition, the GraphicBuffer is turned into a texture and blended.

    Now for the concrete flow. The Client-side GPU composition flow is as follows:

    1. Creating the RenderEngine
    The RenderEngine is created when SurfaceFlinger is initialized.

    void SurfaceFlinger::init() {
        ... ...
        getBE().mRenderEngine = RenderEngine::create(HAL_PIXEL_FORMAT_RGBA_8888,
                hasWideColorDisplay ? RenderEngine::WIDE_COLOR_SUPPORT : 0);
    

    The create function is as follows:

    std::unique_ptr<RenderEngine> RenderEngine::create(int hwcFormat, uint32_t featureFlags) {
        // Initialize the EGLDisplay
        EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
        if (!eglInitialize(display, NULL, NULL)) {
            LOG_ALWAYS_FATAL("failed to initialize EGL");
        }
    
        // GLExtensions handling
    
        EGLint renderableType = 0;
        if (config == EGL_NO_CONFIG) {
            renderableType = EGL_OPENGL_ES2_BIT;
        } else if (!eglGetConfigAttrib(display, config, EGL_RENDERABLE_TYPE, &renderableType)) {
            LOG_ALWAYS_FATAL("can't query EGLConfig RENDERABLE_TYPE");
        }
        EGLint contextClientVersion = 0;
        if (renderableType & EGL_OPENGL_ES2_BIT) {
            contextClientVersion = 2;
        } else if (renderableType & EGL_OPENGL_ES_BIT) {
            contextClientVersion = 1;
        } else {
            LOG_ALWAYS_FATAL("no supported EGL_RENDERABLE_TYPEs");
        }
    
        // Set up the context attributes
        std::vector<EGLint> contextAttributes;
        ... ...
    
        // Create the EGLContext
        EGLContext ctxt = eglCreateContext(display, config, NULL, contextAttributes.data());
    
        ... ...
    
        // Create a dummy pbuffer surface
        EGLint attribs[] = {EGL_WIDTH, 1, EGL_HEIGHT, 1, EGL_NONE, EGL_NONE};
        EGLSurface dummy = eglCreatePbufferSurface(display, dummyConfig, attribs);
    
        // Make the dummy pbuffer current
        EGLBoolean success = eglMakeCurrent(display, dummy, dummy, ctxt);
        LOG_ALWAYS_FATAL_IF(!success, "can't make dummy pbuffer current");
    
        ... ...
    
        std::unique_ptr<RenderEngine> engine;
        switch (version) {
            ... ...
            case GLES_VERSION_3_0:
                engine = std::make_unique<GLES20RenderEngine>(featureFlags);
                break;
        }
        // Hand the EGL objects to the engine
        engine->setEGLHandles(display, config, ctxt);
    
        ... ...
    
        eglMakeCurrent(display, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);
        eglDestroySurface(display, dummy);
    
        return engine;
    }
    

    Initializing the RenderEngine is just the usual GPU rendering setup; anyone who has done some OpenGL programming will find it familiar. Roughly, the flow is:

    • Get the EGLDisplay
      eglGetDisplay

    • Initialize the EGLDisplay
      eglInitialize

    • Choose the EGLConfig
      chooseEglConfig

    • Query the renderableType
      eglGetConfigAttrib

    • Set up the context attributes
      contextAttributes

    • Create the EGLContext
      eglCreateContext

    • Create a pbuffer surface
      eglCreatePbufferSurface

    • Make it current
      eglMakeCurrent, using the dummy pbuffer, so that the GL state can be queried.

    • Create the RenderEngine
      The renderer created here is GLES20RenderEngine, the GLES 2.0 based implementation.

    • Store the EGL handles
      The created EGL objects are stored into the newly created GLES20RenderEngine.

    void RenderEngine::setEGLHandles(EGLDisplay display, EGLConfig config, EGLContext ctxt) {
        mEGLDisplay = display;
        mEGLConfig = config;
        mEGLContext = ctxt;
    }
    

    2. Creating the Surface (FBTarget)
    When the RenderEngine is created it initializes the EGLDisplay, EGLConfig and EGLContext. Those are shared by all displays, but each Display has its own Surface.

    The corresponding Surface is created when the DisplayDevice is created:

    DisplayDevice::DisplayDevice(
           ... ...
          mSurface{flinger->getRenderEngine()},
          ... ...
    {
        // clang-format on
        Surface* surface;
        mNativeWindow = surface = new Surface(producer, false);
        ANativeWindow* const window = mNativeWindow.get();
    
        ... ...
        mSurface.setCritical(mType == DisplayDevice::DISPLAY_PRIMARY);
        mSurface.setAsync(mType >= DisplayDevice::DISPLAY_VIRTUAL);
        mSurface.setNativeWindow(window);
        mDisplayWidth = mSurface.queryWidth();
        mDisplayHeight = mSurface.queryHeight();
    
        ... ...
    
        if (useTripleFramebuffer) {
            surface->allocateBuffers();
        }
    }
    

    Note mSurface.setNativeWindow: through the ANativeWindow, the Surface is connected to the DisplayDevice's BufferQueue.

    void Surface::setNativeWindow(ANativeWindow* window) {
        if (mEGLSurface != EGL_NO_SURFACE) {
            eglDestroySurface(mEGLDisplay, mEGLSurface);
            mEGLSurface = EGL_NO_SURFACE;
        }
    
        mWindow = window;
        if (mWindow) {
            mEGLSurface = eglCreateWindowSurface(mEGLDisplay, mEGLConfig, mWindow, nullptr);
        }
    }
    

    The created EGLSurface mEGLSurface is tied to the native window mWindow. Through the native window, the GPU can dequeue buffers from the BufferQueue, render into them, and queue them back at swapBuffers time. This ANativeWindow is, in essence, the FBTarget.
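
    For reference, the bare-bones EGL sequence behind this (a generic sketch, not the RE::Surface code; the ANativeWindow* is assumed to come from elsewhere, e.g. a Surface) is: create a window surface from the native window, make it current, draw, and let eglSwapBuffers queue the finished buffer back to the BufferQueue.

    #include <EGL/egl.h>
    #include <GLES2/gl2.h>
    #include <android/native_window.h>

    // Minimal sketch: render one cleared frame into an ANativeWindow-backed surface.
    bool renderOneFrame(ANativeWindow* window) {
        EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
        if (dpy == EGL_NO_DISPLAY || !eglInitialize(dpy, nullptr, nullptr)) return false;

        const EGLint cfgAttribs[] = {EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
                                     EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8,
                                     EGL_BLUE_SIZE, 8, EGL_ALPHA_SIZE, 8, EGL_NONE};
        EGLConfig cfg;
        EGLint numCfgs = 0;
        if (!eglChooseConfig(dpy, cfgAttribs, &cfg, 1, &numCfgs) || numCfgs < 1) return false;

        // The window surface dequeues its buffers from the window's BufferQueue.
        EGLSurface surface = eglCreateWindowSurface(dpy, cfg, window, nullptr);
        const EGLint ctxAttribs[] = {EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE};
        EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctxAttribs);
        if (surface == EGL_NO_SURFACE || ctx == EGL_NO_CONTEXT) return false;

        eglMakeCurrent(dpy, surface, surface, ctx);       // bind to this thread
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);                     // the real composition would go here
        return eglSwapBuffers(dpy, surface) == EGL_TRUE;  // queue the buffer back
    }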

    3. Creating the Texture
      The Texture is created when the BufferLayer is constructed:
    BufferLayer::BufferLayer(SurfaceFlinger* flinger, const sp<Client>& client, const String8& name,
                             uint32_t w, uint32_t h, uint32_t flags)
          : Layer(flinger, client, name, w, h, flags),
            ... ...
    
        mFlinger->getRenderEngine().genTextures(1, &mTextureName);
        mTexture.init(Texture::TEXTURE_EXTERNAL, mTextureName);
    }
    

    The texture name is generated with glGenTextures.

    void RenderEngine::genTextures(size_t count, uint32_t* names) {
        glGenTextures(count, names);
    }
    

    When the BufferLayerConsumer is created, this texture name is passed to the consumer, where it is stored as mTexName.

    The texture generated by glGenTextures is kept in the BufferLayer as mTexture.

    4. Starting composition: doComposeSurfaces
    Composition happens in SurfaceFlinger's doComposeSurfaces, which starts with makeCurrent. Every Display has its own Surface, so before compositing for a given Display, the RenderEngine must be given that Display's Surface, viewport and projection matrix, i.e. told which Surface to composite into.

    bool DisplayDevice::makeCurrent() const {
        bool success = mFlinger->getRenderEngine().setCurrentSurface(mSurface);
        setViewportAndProjection();
        return success;
    }
    

    setCurrentSurface is as follows:

    bool RenderEngine::setCurrentSurface(const RE::Surface& surface) {
        bool success = true;
        EGLSurface eglSurface = surface.getEGLSurface();
        if (eglSurface != eglGetCurrentSurface(EGL_DRAW)) {
            success = eglMakeCurrent(mEGLDisplay, eglSurface, eglSurface, mEGLContext) == EGL_TRUE;
            if (success && surface.getAsync()) {
                eglSwapInterval(mEGLDisplay, 0);
            }
        }
    
        return success;
    }
    

    An EGL context and surface can only be current on one thread at a time, so eglMakeCurrent is used to rebind them; after eglMakeCurrent, the OpenGL drawing commands issued on the current thread are directed at this surface.

    5. Compositing the layers
    During composition, every layer of a Display is composited onto that Display's Surface. The work is mainly done in Layer's draw method:

    void Layer::draw(const RenderArea& renderArea, const Region& clip) const {
        onDraw(renderArea, clip, false);
    }
    

    BufferLayer and ColorLayer each implement their own onDraw. Let's start with BufferLayer, which is the more complex of the two.

    BufferLayer's onDraw flow is as follows:

    • Bind the texture
    void BufferLayer::onDraw(const RenderArea& renderArea, const Region& clip,
                             bool useIdentityTransform) const {
        ATRACE_CALL();
    
        if (CC_UNLIKELY(getBE().compositionInfo.mBuffer == 0)) {
            ... ...
            return;
        }
    
        // Bind the texture
        status_t err = mConsumer->bindTextureImage();
        ... ...
    
    status_t BufferLayerConsumer::bindTextureImage() {
        Mutex::Autolock lock(mMutex);
        return bindTextureImageLocked();
    }
    

    The binding is mainly done in bindTextureImageLocked:

    status_t BufferLayerConsumer::bindTextureImageLocked() {
        mRE.checkErrors();
    
        if (mCurrentTexture == BufferQueue::INVALID_BUFFER_SLOT && mCurrentTextureImage == NULL) {
            ... ...
            return NO_INIT;
        }
    
        const Rect& imageCrop = canUseImageCrop(mCurrentCrop) ? mCurrentCrop : Rect::EMPTY_RECT;
        status_t err = mCurrentTextureImage->createIfNeeded(imageCrop);
        if (err != NO_ERROR) {
            ... ...
            return UNKNOWN_ERROR;
        }
    
        mRE.bindExternalTextureImage(mTexName, mCurrentTextureImage->image());
    
        return doFenceWaitLocked();
    }
    

    mCurrentTextureImage was updated when the buffer was acquired at the start of composition. The image is created via createIfNeeded.

    status_t BufferLayerConsumer::Image::createIfNeeded(const Rect& imageCrop) {
        const int32_t cropWidth = imageCrop.width();
        const int32_t cropHeight = imageCrop.height();
        if (mCreated && mCropWidth == cropWidth && mCropHeight == cropHeight) {
            return OK;
        }
    
        mCreated = mImage.setNativeWindowBuffer(mGraphicBuffer->getNativeBuffer(),
                                                mGraphicBuffer->getUsage() & GRALLOC_USAGE_PROTECTED,
                                                cropWidth, cropHeight);
        if (mCreated) {
            ... ...
        }

        return mCreated ? OK : UNKNOWN_ERROR;
    }
    

    The actual EGLImage is created in setNativeWindowBuffer:

    bool Image::setNativeWindowBuffer(ANativeWindowBuffer* buffer, bool isProtected, int32_t cropWidth,
                                      int32_t cropHeight) {
        if (mEGLImage != EGL_NO_IMAGE_KHR) {
            ... ... // release the previous mEGLImage
        }
    
        if (buffer) {
            std::vector<EGLint> attrs = buildAttributeList(isProtected, cropWidth, cropHeight);
            mEGLImage = eglCreateImageKHR(mEGLDisplay, EGL_NO_CONTEXT, EGL_NATIVE_BUFFER_ANDROID,
                                          static_cast<EGLClientBuffer>(buffer), attrs.data());
            if (mEGLImage == EGL_NO_IMAGE_KHR) {
                ALOGE("failed to create EGLImage: %#x", eglGetError());
                return false;
            }
        }
    
        return true;
    }
    

    In setNativeWindowBuffer, the old mEGLImage is released first and a new one is created. Note the parameters of eglCreateImageKHR: the buffer here is the GraphicBuffer obtained from acquireBuffer, and eglCreateImageKHR creates an EGLImage backed by that GraphicBuffer.

    Back in bindTextureImageLocked, the created EGLImage is bound via bindExternalTextureImage:

    void RenderEngine::bindExternalTextureImage(uint32_t texName, const RE::Image& image) {
        const GLenum target = GL_TEXTURE_EXTERNAL_OES;
    
        glBindTexture(target, texName);
        if (image.getEGLImage() != EGL_NO_IMAGE_KHR) {
            glEGLImageTargetTexture2DOES(target, static_cast<GLeglImageOES>(image.getEGLImage()));
        }
    }
    

    Finally, glEGLImageTargetTexture2DOES attaches the created EGLImage to the texture mTexName. With that, the layer's pixel data is available to the GPU.

    Back in onDraw:

    • DRM handling
      Protected content, or secure content that would end up on a non-secure display, must not be shown; in that case the affected area is rendered black:
    void GLES20RenderEngine::setupLayerBlackedOut() {
        glBindTexture(GL_TEXTURE_2D, mProtectedTexName);
        Texture texture(Texture::TEXTURE_2D, mProtectedTexName);
        texture.setDimensions(1, 1); // FIXME: we should get that from somewhere
        mState.setTexture(texture);
    }
    
    • Get the texture matrix
    void BufferLayer::onDraw(const RenderArea& renderArea, const Region& clip,
                             bool useIdentityTransform) const {
        ... ...
        bool blackOutLayer = isProtected() || (isSecure() && !renderArea.isSecure());
    
        RenderEngine& engine(mFlinger->getRenderEngine());
    
        if (!blackOutLayer) {
            const bool useFiltering = getFiltering() || needsFiltering(renderArea) || isFixedSize();
    
            // Query the texture matrix given our current filtering mode.
            float textureMatrix[16];
            mConsumer->setFilteringEnabled(useFiltering);
            mConsumer->getTransformMatrix(textureMatrix);
    
            if (getTransformToDisplayInverse()) {
                // Handle the display-inverse transform
            }
    
            // Set things up for texturing.
            mTexture.setDimensions(getBE().compositionInfo.mBuffer->getWidth(),
                                   getBE().compositionInfo.mBuffer->getHeight());
            mTexture.setFiltering(useFiltering);
            mTexture.setMatrix(textureMatrix);
    
            engine.setupLayerTexturing(mTexture);
        } else {
            engine.setupLayerBlackedOut();
        }
        drawWithOpenGL(renderArea, useIdentityTransform);
        engine.disableTexturing();
    }
    

    The textureMatrix is computed in GLConsumer::computeTransformMatrix; have a look if you are interested.
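
    As a simplified illustration of what such a texture matrix encodes (an assumed sketch, not the real computeTransformMatrix): a crop rectangle inside the buffer becomes a scale plus a translate of the [0, 1] texture coordinates, and a vertical flip accounts for the buffer's top-left origin versus GL's bottom-left convention.

    #include <cstdio>

    // Column-major 4x4, the layout GL expects.
    struct Mat4 { float m[16]; };

    // Texture transform for a crop rect inside a bufW x bufH buffer, with an
    // optional vertical flip. Simplified: no rotation and no filtering inset.
    Mat4 cropTexMatrix(float cropLeft, float cropTop, float cropRight, float cropBottom,
                       float bufW, float bufH, bool flipV) {
        float sx = (cropRight - cropLeft) / bufW;
        float sy = (cropBottom - cropTop) / bufH;
        float tx = cropLeft / bufW;
        float ty = cropTop / bufH;
        if (flipV) {            // v' = 1 - v within the crop
            ty = ty + sy;
            sy = -sy;
        }
        return {{sx, 0, 0, 0,
                 0, sy, 0, 0,
                 0,  0, 1, 0,
                 tx, ty, 0, 1}};
    }

    int main() {
        // Sample only the top-left 960x540 quarter of a 1920x1080 buffer, flipped.
        Mat4 m = cropTexMatrix(0, 0, 960, 540, 1920, 1080, true);
        // A texture coordinate (u, v) is mapped to (sx*u + tx, sy*v + ty).
        std::printf("scale=(%.2f, %.2f) translate=(%.2f, %.2f)\n", m.m[0], m.m[5], m.m[12], m.m[13]);
        return 0;
    }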

    • Draw with OpenGL
      The drawing is done by drawWithOpenGL:
    void BufferLayer::drawWithOpenGL(const RenderArea& renderArea, bool useIdentityTransform) const {
        const State& s(getDrawingState());
    
        // Compute the geometry bounds and fill in the mesh
        computeGeometry(renderArea, getBE().mMesh, useIdentityTransform);
    
        const Rect bounds{computeBounds()}; // Rounds from FloatRect
    
        Transform t = getTransform();
        Rect win = bounds;
        if (!s.finalCrop.isEmpty()) {
            ... ... // Handle finalCrop
        }
    
        float left = float(win.left) / float(s.active.w);
        float top = float(win.top) / float(s.active.h);
        float right = float(win.right) / float(s.active.w);
        float bottom = float(win.bottom) / float(s.active.h);
    
        // Compute the texture coordinates
        Mesh::VertexArray<vec2> texCoords(getBE().mMesh.getTexCoordArray<vec2>());
        texCoords[0] = vec2(left, 1.0f - top);
        texCoords[1] = vec2(left, 1.0f - bottom);
        texCoords[2] = vec2(right, 1.0f - bottom);
        texCoords[3] = vec2(right, 1.0f - top);
    
        RenderEngine& engine(mFlinger->getRenderEngine());
        engine.setupLayerBlending(mPremultipliedAlpha, isOpaque(s), false /* disableTexture */,
                                  getColor());
        engine.setSourceDataSpace(mCurrentState.dataSpace);
        engine.drawMesh(getBE().mMesh);
        engine.disableBlending();
    }
    

    setupLayerBlending sets up the alpha blending:

    void GLES20RenderEngine::setupLayerBlending(bool premultipliedAlpha, bool opaque,
                                                bool disableTexture, const half4& color) {
        mState.setPremultipliedAlpha(premultipliedAlpha);
        mState.setOpaque(opaque);
        mState.setColor(color);
    
        if (disableTexture) {
            mState.disableTexture();
        }
    
        if (color.a < 1.0f || !opaque) {
            glEnable(GL_BLEND);
            glBlendFunc(premultipliedAlpha ? GL_ONE : GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        } else {
            glDisable(GL_BLEND);
        }
    }
    

    drawMesh draws the content:

    void GLES20RenderEngine::drawMesh(const Mesh& mesh) {
        if (mesh.getTexCoordsSize()) {
            glEnableVertexAttribArray(Program::texCoords);
            glVertexAttribPointer(Program::texCoords, mesh.getTexCoordsSize(), GL_FLOAT, GL_FALSE,
                                  mesh.getByteStride(), mesh.getTexCoords());
        }
    
        glVertexAttribPointer(Program::position, mesh.getVertexSize(), GL_FLOAT, GL_FALSE,
                              mesh.getByteStride(), mesh.getPositions());
    
        if (usesWideColor()) {
            Description wideColorState = mState;
            if (mDataSpace != HAL_DATASPACE_DISPLAY_P3) {
                ... ...
            }
            ProgramCache::getInstance().useProgram(wideColorState);
    
            glDrawArrays(mesh.getPrimitive(), 0, mesh.getVertexCount());
    
            if (outputDebugPPMs) {
                ... ...
            }
        } else {
            ProgramCache::getInstance().useProgram(mState);
    
            glDrawArrays(mesh.getPrimitive(), 0, mesh.getVertexCount());
        }
    
        if (mesh.getTexCoordsSize()) {
            glDisableVertexAttribArray(Program::texCoords);
        }
    }
    

    glDrawArrays does the actual drawing.

    Once all the layers have been drawn, swapBuffers is called.

    6. Swapping buffers
    The Surface swaps its buffers:

    void Surface::swapBuffers() const {
        if (!eglSwapBuffers(mEGLDisplay, mEGLSurface)) {
            ... ...
        }
    }
    

    eglSwapBuffers swaps the buffer the GPU has been rendering into; the finished buffer, which now contains the composited layers, is queued into the BufferQueue.

    As described earlier, advanceFrame then acquires that buffer and hands the Client composition result to the HWC via setClientTarget, so the lower layers can display it.

    That is the Client-side composition.
