MTK Dual-Camera Algorithm Integration

Author: 程序员Android1 | Published 2021-07-08 09:00

    Lifelong learning together with you; this is 程序员 Android.

    A recommended classic read. From this article you will take away the following:

    1. Overview of dual-camera algorithms
    2. Choosing a feature and configuring the feature table
    3. Hooking up the algorithm
    4. Calling the algorithm from the APP
    5. Conclusion

    1. Overview of dual-camera algorithms

    Dual-camera algorithms are considerably more complex than single-frame and multi-frame algorithms. Whether they are used for night shots, HDR, or background blur (depth of field / portrait / large aperture), dual-camera algorithms generally require the images from the main and auxiliary cameras to be synchronized. In addition, because every pair of camera modules has some unit-to-unit variation, a dedicated calibration program is developed and run on the factory production line. The calibration program writes the calibration parameters (that is, the calibration result) to a partition that is not easily erased (such as the NV partition). At capture time, the dual-camera algorithm uses the calibration parameters to correct module differences, computes parameters such as depth and exposure from the main and auxiliary images, and then uses those parameters to adjust the main image, producing effects such as enhanced night shots, HDR, and background blur (depth of field / portrait / large aperture).

    For algorithm integration, there are generally two parts:

    • Integrating the calibration program: this includes the calibration APP and configuring the APP's SELinux permissions, among other things.
    • Integrating the dual-camera algorithm itself: as with single-frame and multi-frame algorithms, choose the corresponding feature, implement the corresponding plugin, and hook up the algorithm.

    Preloading the calibration APP is fairly simple, so this article does not cover it. For configuring the calibration APP's SELinux permissions, see my other article: SELinux permissions.

    Since I cannot provide a real dual-camera algorithm, I again provide a mock algorithm library, as in the single-frame integration article. This mock library composites the main and auxiliary camera images, pasting the auxiliary image into the middle of the main image to produce a picture-in-picture effect.

    2. Choosing a feature and configuring the feature table

    2.1 Choosing a feature

    Dual-camera algorithms are very common, and MTK already defines several dual-camera features. In summary, roughly the following features are intended for dual-camera use:
    vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/3rdparty/mtk/mtk_feature_type.h:

        MTK_FEATURE_DEPTH       = 1ULL << 8, 
        MTK_FEATURE_BOKEH       = 1ULL << 9,
        MTK_FEATURE_VSDOF       = (MTK_FEATURE_DEPTH|MTK_FEATURE_BOKEH),
        MTK_FEATURE_DUAL_YUV    = 1ULL << 14,
        MTK_FEATURE_DUAL_HWDEPTH  = 1ULL << 15,
    
    

    Among these, MTK_FEATURE_DEPTH and MTK_FEATURE_BOKEH are used for dual-camera bokeh, with depth computation and blur processing performed at two separate hook points.

    vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/3rdparty/customer/customer_feature_type.h:

        TP_FEATURE_DEPTH        = 1ULL << 37,
        TP_FEATURE_BOKEH        = 1ULL << 38,
        TP_FEATURE_VSDOF        = (TP_FEATURE_DEPTH|TP_FEATURE_BOKEH),
        TP_FEATURE_FUSION       = 1ULL << 39,
        TP_FEATURE_HDR_DC       = 1ULL << 40,
        TP_FEATURE_DUAL_YUV     = 1ULL << 41,
        TP_FEATURE_DUAL_HWDEPTH = 1ULL << 42,
        TP_FEATURE_PUREBOKEH    = 1ULL << 43,
    
    

    These are the features defined on the customer side. Among them, TP_FEATURE_DEPTH and TP_FEATURE_BOKEH are likewise used for dual-camera bokeh, with depth computation and blur processing performed in two separate nodes. TP_FEATURE_FUSION and TP_FEATURE_PUREBOKEH are also for dual-camera bokeh, but they perform depth computation and blurring at the same hook point. TP_FEATURE_HDR_DC is for dual-camera HDR algorithms.

    Judging by MTK's design intent, MTK_FEATURE_DUAL_YUV and TP_FEATURE_DUAL_YUV should also be usable for dual-camera algorithms, but I have not tried them; I usually use TP_FEATURE_FUSION or TP_FEATURE_PUREBOKEH. Interested readers can try them out.

    Since MTK has already predefined these, for this step we simply pick the one that fits and do not need to add a new feature. Because this is a third-party algorithm, we choose TP_FEATURE_PUREBOKEH.

    2.2 Configuring the feature table

    In the previous step we chose TP_FEATURE_PUREBOKEH. Conveniently, MTK has already defined MTK_FEATURE_COMBINATION_TP_PUREBOKEH in vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp, so the definition step can be skipped as well; we only need to add MTK_FEATURE_COMBINATION_TP_PUREBOKEH to the corresponding MTK_CAMERA_SCENARIO_CAPTURE_DUALCAM scenario. Since we have no other dual-camera algorithms here, we comment out the other two lines.

    diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp
    index 38365e0602..7adc2a76db 100755
    --- a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp
    +++ b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp
    @@ -363,8 +363,9 @@ const std::vector<std::unordered_map<int32_t, ScenarioFeatures>>  gMtkScenarioFe
             CAMERA_SCENARIO_END
             //
             CAMERA_SCENARIO_START(MTK_CAMERA_SCENARIO_CAPTURE_DUALCAM)
    -        ADD_CAMERA_FEATURE_SET(MTK_FEATURE_MFNR,   MTK_FEATURE_COMBINATION_TP_VSDOF_MFNR)
    -        ADD_CAMERA_FEATURE_SET(NO_FEATURE_NORMAL,  MTK_FEATURE_COMBINATION_TP_VSDOF)
    +        //ADD_CAMERA_FEATURE_SET(MTK_FEATURE_MFNR,   MTK_FEATURE_COMBINATION_TP_VSDOF_MFNR)
    +        //ADD_CAMERA_FEATURE_SET(NO_FEATURE_NORMAL,  MTK_FEATURE_COMBINATION_TP_VSDOF)
    +        ADD_CAMERA_FEATURE_SET(NO_FEATURE_NORMAL,  MTK_FEATURE_COMBINATION_TP_PUREBOKEH)
             CAMERA_SCENARIO_END
             //
             CAMERA_SCENARIO_START(MTK_CAMERA_SCENARIO_CAPTURE_CSHOT)
    
    

    Note:
    On Android 9.0 code the feature table is keyed by camera id, so the configuration must modify the MTK_CAMERA_SCENARIO_CAPTURE_DUALCAM entry under openId = 4.

    As an aside, on a phone with four cameras the logical camera ids are typically assigned as follows:

    • 0: rear main camera
    • 1: front main camera
    • 2: rear auxiliary camera
    • 3: rear wide-angle camera
    • 4: dual-camera mode, with cameras 0 and 2 opened together

    Phones on the market already have five or six cameras, and some have multiple dual-camera combinations, for example main + auxiliary for bokeh, wide-angle + telephoto, and main + macro. Some phones even have two front cameras. I have not yet worked on a project with multiple dual-camera combinations or with a front dual camera, and every company, or even every project, may differ somewhat, so the list above is not necessarily complete or accurate; readers who know more are welcome to add to it.

    3. Hooking up the algorithm

    3.1 Choosing a plugin for the algorithm

    In vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/3rdparty/plugin/PipelinePluginType.h, MTK HAL3 roughly divides the hook points for third-party algorithms into the following categories:

    • BokehPlugin: hook point for the bokeh algorithm, i.e. the blurring part of a dual-camera depth algorithm.
    • DepthPlugin: hook point for the depth algorithm, i.e. the depth-computation part of a dual-camera depth algorithm.
    • FusionPlugin: hook point for a combined dual-camera depth algorithm, with depth and bokeh handled in a single algorithm.
    • JoinPlugin: hook point for streaming-related algorithms; preview algorithms are hooked up here.
    • MultiFramePlugin: hook point for multi-frame algorithms, both YUV and RAW, e.g. MFNR/HDR.
    • RawPlugin: hook point for RAW algorithms, e.g. remosaic.
    • YuvPlugin: hook point for single-frame YUV algorithms, e.g. beautification and wide-angle lens distortion correction.

    Match the algorithm to be integrated against these and pick the appropriate plugin. Here the mock algorithm library handles the whole dual-camera algorithm at a single hook point, so we choose FusionPlugin.
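
    Structurally, a Fusion provider implements the FusionPlugin::IProvider interface and registers itself with REGISTER_PLUGIN_PROVIDER. The condensed outline below is only a sketch of the full implementation given in section 3.3.3 (the class and parameter names here are illustrative):

    #include <vector>
    #include <mtkcam3/3rdparty/plugin/PipelinePlugin.h>
    #include <mtkcam3/3rdparty/plugin/PipelinePluginType.h>

    using namespace NSCam::NSPipelinePlugin;
    using Property           = FusionPlugin::Property;
    using Selection          = FusionPlugin::Selection;
    using RequestPtr         = FusionPlugin::Request::Ptr;
    using RequestCallbackPtr = FusionPlugin::RequestCallback::Ptr;

    class MyFusionProvider final : public FusionPlugin::IProvider {
    public:
        void set(MINT32 openId, MINT32 openId2) override;            // logical camera ids of main/aux
        const Property& property() override;                         // report the feature, e.g. TP_FEATURE_PUREBOKEH
        MERROR negotiate(Selection& sel) override;                    // declare accepted input/output formats and sizes
        void init() override;                                         // one-time initialization
        MERROR process(RequestPtr requestPtr, RequestCallbackPtr callbackPtr) override;  // run the algorithm
        void abort(std::vector<RequestPtr>& requestPtrs) override;    // cancel in-flight requests
        void uninit() override;
        ~MyFusionProvider();
    };
    // registration macro (method definitions omitted in this outline)
    REGISTER_PLUGIN_PROVIDER(Fusion, MyFusionProvider);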

    3.2 Adding a global build flag

    To control whether a given project integrates this algorithm, we add a flag in device/mediateksample/[platform]/ProjectConfig.mk that gates compilation of the newly added algorithm:

    QXT_DUALCAMERA_SUPPORT = yes
    
    

    When a project does not need this algorithm, simply set QXT_DUALCAMERA_SUPPORT to no in device/mediateksample/[platform]/ProjectConfig.mk.

    3.3 Writing the algorithm integration files

    vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/cp_dualcamera/
    ├── Android.mk
    ├── DualCameraCapture.cpp
    ├── include
    │   └── dual_camera.h
    └── lib
        ├── arm64-v8a
        │   └── libdualcamera.so
        └── armeabi-v7a
            └── libdualcamera.so

    File descriptions:

    • Android.mk declares the algorithm library, the header files, and the integration source file DualCameraCapture.cpp, and builds them into the static library libmtkcam.plugin.tp_dc, which libmtkcam_3rdparty.customer depends on and calls.

    • libdualcamera.so composites the main and auxiliary images into a single picture-in-picture image; it stands in for the third-party dual-camera algorithm library to be integrated. dual_camera.h is its header file.

    • DualCameraCapture.cpp is the integration source file.

    3.3.1 mtkcam3/3rdparty/customer/cp_dualcamera/Android.mk
    ifeq ($(QXT_DUALCAMERA_SUPPORT),yes)
    
    LOCAL_PATH := $(call my-dir)
    
    include $(CLEAR_VARS)
    LOCAL_MODULE := libdualcamera
    LOCAL_SRC_FILES_32 := lib/armeabi-v7a/libdualcamera.so
    LOCAL_SRC_FILES_64 := lib/arm64-v8a/libdualcamera.so
    LOCAL_MODULE_TAGS := optional
    LOCAL_MODULE_CLASS := SHARED_LIBRARIES
    LOCAL_MODULE_SUFFIX := .so
    LOCAL_PROPRIETARY_MODULE := true
    LOCAL_MULTILIB := both
    include $(BUILD_PREBUILT)
    ################################################################################
    #
    ################################################################################
    include $(CLEAR_VARS)
    
    #-----------------------------------------------------------
    -include $(TOP)/$(MTK_PATH_SOURCE)/hardware/mtkcam/mtkcam.mk
    
    #-----------------------------------------------------------
    LOCAL_SRC_FILES += DualCameraCapture.cpp
    
    #-----------------------------------------------------------
    LOCAL_C_INCLUDES += $(MTKCAM_C_INCLUDES)
    LOCAL_C_INCLUDES += $(TOP)/$(MTK_PATH_SOURCE)/hardware/mtkcam3/include $(MTK_PATH_SOURCE)/hardware/mtkcam/include
    LOCAL_C_INCLUDES += $(TOP)/$(MTK_PATH_COMMON)/hal/inc
    LOCAL_C_INCLUDES += $(TOP)/$(MTK_PATH_CUSTOM_PLATFORM)/hal/inc
    #
    LOCAL_C_INCLUDES += system/media/camera/include
    LOCAL_C_INCLUDES += $(TOP)/$(MTK_PATH_SOURCE)/hardware/mtkcam3/3rdparty/customer/cp_dualcamera/include
    
    #-----------------------------------------------------------
    LOCAL_CFLAGS += $(MTKCAM_CFLAGS)
    #
    
    #-----------------------------------------------------------
    LOCAL_STATIC_LIBRARIES +=
    #
    LOCAL_WHOLE_STATIC_LIBRARIES +=
    
    #-----------------------------------------------------------
    LOCAL_SHARED_LIBRARIES += liblog
    LOCAL_SHARED_LIBRARIES += libutils
    LOCAL_SHARED_LIBRARIES += libcutils
    LOCAL_SHARED_LIBRARIES += libmtkcam_metadata
    LOCAL_SHARED_LIBRARIES += libmtkcam_imgbuf
    #LOCAL_SHARED_LIBRARIES += libmtkcam_3rdparty
    
    #-----------------------------------------------------------
    LOCAL_HEADER_LIBRARIES := libutils_headers liblog_headers libhardware_headers
    
    #-----------------------------------------------------------
    LOCAL_MODULE := libmtkcam.plugin.tp_dc
    LOCAL_PROPRIETARY_MODULE := true
    LOCAL_MODULE_OWNER := mtk
    LOCAL_MODULE_TAGS := optional
    include $(MTK_STATIC_LIBRARY)
    
    ################################################################################
    #
    ################################################################################
    include $(call all-makefiles-under,$(LOCAL_PATH))
    endif
    
    
    3.3.2 mtkcam3/3rdparty/customer/cp_dualcamera/include/dual_camera.h
    #ifndef QXT_DUAL_CAMERA_H
    #define QXT_DUAL_CAMERA_H
    
    typedef unsigned char uchar;
    
    #define CENTER 0
    #define LEFT_TOP 1
    #define LEFT_BOTTOM 2
    #define RIGHT_TOP 3
    #define RIGHT_BOTTOM 4
    
    class DualCamera {
    
    public:
        DualCamera();
    
        ~DualCamera();
    
        void processI420(uchar *main, int mainWidth, int mainHeight,
                                uchar *sub, int subWidth, int subHeight);
    
        void processI420(uchar *mainY, uchar *mainU, uchar *mainV, int mainWidth, int mainHeight,
                                uchar *subY, uchar *subU, uchar *subV, int subWidth, int subHeight);
    
        void processNV21(uchar *main, int mainWidth, int mainHeight,
                                uchar *sub, int subWidth, int subHeight);
    
        void processNV21(uchar *mainY, uchar *mainUV, int mainWidth, int mainHeight,
                                uchar *subY, uchar *subUV, int subWidth, int subHeight);
    
    private:
        int position;
    };
    
    #endif //QXT_DUAL_CAMERA_H
    
    

    The interface functions declared in the header:

    • DualCamera: constructor; it simulates reading a calibration parameter file. Here the mock calibration file contains just one number, which specifies where the auxiliary image is placed.
    • processI420: composites the main and auxiliary images into a picture-in-picture; input and output images must be in I420 format.
    • processNV21: composites the main and auxiliary images into a picture-in-picture; input and output images must be in NV21 format.
    • ~DualCamera(): destructor; it does nothing.
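
    Since the mock "calibration data" is just a single binary int selecting the paste position, a throwaway helper to generate the file on a test device might look like the sketch below. The path matches the one hard-coded in dual_camera.cpp; writing under /vendor/persist normally needs root and a suitable SELinux context, so treat this as a bench-testing convenience only:

    #include <cstdio>
    #include "dual_camera.h"   // for the CENTER / LEFT_TOP / ... position constants

    int main() {
        int position = RIGHT_BOTTOM;   // where the auxiliary image will be pasted
        FILE *fp = std::fopen("/vendor/persist/camera/calibration.cfg", "wb");
        if (fp == nullptr) {
            std::perror("fopen");
            return 1;
        }
        std::fwrite(&position, sizeof(position), 1, fp);
        std::fclose(fp);
        return 0;
    }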

    For the convenience of interested readers, the implementation dual_camera.cpp is included as well:

    #include <cstring>
    #include <cstdio>
    #include "dual_camera.h"
    #include "logger.h"
    
    using namespace std;
    
    DualCamera::DualCamera() {
        const char *path = "/vendor/persist/camera/calibration.cfg";
        FILE *fp;
        if ((fp = fopen(path, "r")) != nullptr) {
            // The mock "calibration data" is a single int selecting the paste position.
            int value = CENTER;
            if (fread(&value, sizeof(value), 1, fp) != 1) {
                LOGE("Failed to read: %s", path);
                value = CENTER;
            }
            position = value;
            fclose(fp);
        } else {
            LOGE("Failed to open: %s", path);
            position = CENTER;
        }
    }
    
    DualCamera::~DualCamera() = default;
    
    void DualCamera::processI420(uchar *main, int mainWidth, int mainHeight,
                                     uchar *sub, int subWidth, int subHeight) {
        uchar *mainY = main;
        uchar *mainU = main + mainWidth * mainHeight;
        uchar *mainV = main + mainWidth * mainHeight * 5 / 4;
        uchar *subY = sub;
        uchar *subU = sub + subWidth * subHeight;
        uchar *subV = sub + subWidth * subHeight * 5 / 4;
        processI420(mainY, mainU, mainV, mainWidth, mainHeight, subY, subU, subV, subWidth, subHeight);
    }
    
    void
    DualCamera::processI420(uchar *mainY, uchar *mainU, uchar *mainV, int mainWidth, int mainHeight,
                                uchar *subY, uchar *subU, uchar *subV, int subWidth, int subHeight) {
        int mainUVHeight = mainHeight / 2;
        int mainUVWidth = mainWidth / 2;
    
        int subUVHeight = subHeight / 2;
        int subUVWidth = subWidth / 2;
    
        //merge
        unsigned char *pDstY;
        unsigned char *pSrcY;
    
        for (int i = 0; i < subHeight; i++) {
            pSrcY = subY + i * subWidth;
            if (position == LEFT_TOP) {
                pDstY = mainY + i * mainWidth;
            } else if (position == LEFT_BOTTOM) {
                pDstY = mainY + i * mainWidth + ((mainHeight - subHeight) * mainWidth);
            } else if (position == RIGHT_TOP) {
                pDstY = mainY + i * mainWidth + (mainWidth - subWidth);
            } else if (position == RIGHT_BOTTOM) {
                pDstY = mainY + i * mainWidth + ((mainHeight - subHeight) * mainWidth) +
                        (mainWidth - subWidth);
            } else if (position == CENTER) {
                pDstY = mainY + i * mainWidth + ((mainHeight - subHeight) / 2 * mainWidth) +
                        (mainWidth - subWidth) / 2;
            } else {
                LOGE("Unsupported position: %d", position);
                return;
            }
            memcpy(pDstY, pSrcY, subWidth);
        }
    
        unsigned char *pDstU;
        unsigned char *pDstV;
        unsigned char *pSrcU;
        unsigned char *pSrcV;
        for (int i = 0; i < subUVHeight; i++) {
            pSrcU = subU + i * subUVWidth;
            pSrcV = subV + i * subUVWidth;
            if (position == LEFT_TOP) {
                pDstU = mainU + i * mainUVWidth;
                pDstV = mainV + i * mainUVWidth;
            } else if (position == LEFT_BOTTOM) {
                pDstU = mainU + ((mainUVHeight - subUVHeight) * mainUVWidth) + i * mainUVWidth;
                pDstV = mainV + ((mainUVHeight - subUVHeight) * mainUVWidth) + i * mainUVWidth;
            } else if (position == RIGHT_TOP) {
                pDstU = mainU + i * mainUVWidth + mainUVWidth - subUVWidth;
                pDstV = mainV + i * mainUVWidth + mainUVWidth - subUVWidth;
            } else if (position == RIGHT_BOTTOM) {
                pDstU = mainU + ((mainUVHeight - subUVHeight) * mainUVWidth) +
                        i * mainUVWidth + (mainUVWidth - subUVWidth);
                pDstV = mainV + ((mainUVHeight - subUVHeight) * mainUVWidth) +
                        i * mainUVWidth + (mainUVWidth - subUVWidth);
            } else if (position == CENTER) {
                pDstU = mainU + ((mainUVHeight - subUVHeight) / 2 * mainUVWidth) +
                        i * mainUVWidth + (mainUVWidth - subUVWidth) / 2;
                pDstV = mainV + ((mainUVHeight - subUVHeight) / 2 * mainUVWidth) +
                        i * mainUVWidth + (mainUVWidth - subUVWidth) / 2;
            } else {
                LOGE("Unsupported position: %d", position);
                return;
            }
            memcpy(pDstU, pSrcU, subUVWidth);
            memcpy(pDstV, pSrcV, subUVWidth);
    
        }
    }
    
    void DualCamera::processNV21(uchar *main, int mainWidth, int mainHeight,
                     uchar *sub, int subWidth, int subHeight) {
        uchar *mainY = main;
        uchar *mainUV = main + mainWidth * mainHeight;
        uchar *subY = sub;
        uchar *subUV = sub + subWidth * subHeight;
        processNV21(mainY, mainUV, mainWidth, mainHeight, subY, subUV, subWidth, subHeight);
    }
    
    void DualCamera::processNV21(uchar *mainY, uchar *mainUV, int mainWidth, int mainHeight,
                     uchar *subY, uchar *subUV, int subWidth, int subHeight) {
        LOGD("[processNV21] mainY:%p, mainUV:%p, mainWidth:%d, mainHeight:%d, subY:%p, subUV:%p, subWidth:%d, subHeight:%d, position:%d",
                mainY, mainUV, mainWidth, mainHeight, subY, subUV, subWidth, subHeight, position);
        int mainUVHeight = mainHeight / 2;
        int mainUVWidth = mainWidth / 2;
        unsigned char *pDstY;
        unsigned char *pSrcY;
    
        for (int i = 0; i < subHeight; i++) {
            pSrcY = subY + i * subWidth;
            if (position == LEFT_TOP) {
                pDstY = mainY + i * mainWidth;
            } else if (position == LEFT_BOTTOM) {
                pDstY = mainY + i * mainWidth + ((mainHeight - subHeight) * mainWidth);
            } else if (position == RIGHT_TOP) {
                pDstY = mainY + i * mainWidth + (mainWidth - subWidth);
            } else if (position == RIGHT_BOTTOM) {
                pDstY = mainY + i * mainWidth + ((mainHeight - subHeight) * mainWidth) +
                        (mainWidth - subWidth);
            } else if (position == CENTER) {
                pDstY = mainY + i * mainWidth + ((mainHeight - subHeight) / 2 * mainWidth) +
                        (mainWidth - subWidth) / 2;
            } else {
                LOGE("Unsupported position: %d", position);
                return;
            }
            memcpy(pDstY, pSrcY, subWidth);
        }
    
        int subUVHeight = subHeight / 2;
        int subUVWidth = subWidth / 2;
        unsigned char *pDstUV;
        unsigned char *pSrcUV;
        for (int i = 0; i < subUVHeight; i++) {
            pSrcUV = subUV + i * subUVWidth * 2;
            if (position == LEFT_TOP) {
                pDstUV = mainUV + i * mainUVWidth * 2;
            } else if (position == LEFT_BOTTOM) {
                pDstUV = mainUV + ((mainUVHeight - subUVHeight) * mainUVWidth + i * mainUVWidth) * 2;
            } else if (position == RIGHT_TOP) {
                pDstUV = mainUV + (i * mainUVWidth + mainUVWidth - subUVWidth) * 2;
            } else if (position == RIGHT_BOTTOM)  {
                pDstUV = mainUV + ((mainUVHeight - subUVHeight) * mainUVWidth +
                        i * mainUVWidth + mainUVWidth - subUVWidth) * 2;
            } else if (position == CENTER) {
                pDstUV = mainUV + ((mainUVHeight - subUVHeight) / 2 * mainUVWidth +
                        i * mainUVWidth + (mainUVWidth - subUVWidth) / 2) * 2;
            } else {
                LOGE("Unsupported position: %d", position);
                return;
            }
            memcpy(pDstUV, pSrcUV, subUVWidth * 2);
        }
    }
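
    To make the library's behavior concrete, here is a minimal standalone usage sketch. The buffer sizes are made up for illustration; in the real plugin the buffers come from IImageBuffer, as shown in section 3.3.3:

    #include <vector>
    #include "dual_camera.h"

    int main() {
        const int mainW = 1920, mainH = 1080;   // main camera frame (illustrative size)
        const int subW  = 640,  subH  = 360;    // auxiliary camera frame (illustrative size)

        // NV21 layout: full-resolution Y plane followed by an interleaved VU plane (1.5 bytes per pixel).
        std::vector<uchar> mainImg(mainW * mainH * 3 / 2, 0x80);
        std::vector<uchar> subImg(subW * subH * 3 / 2, 0x10);

        DualCamera dc;  // reads the mock calibration file to decide where the aux image is pasted
        dc.processNV21(mainImg.data(), mainW, mainH, subImg.data(), subW, subH);
        // mainImg now holds the picture-in-picture result.
        return 0;
    }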
    
    
    3.3.3 mtkcam3/3rdparty/customer/cp_dualcamera/DualCameraCapture.cpp
    #define LOG_TAG "DualCamera"
    
    // Standard C header file
    #include <stdlib.h>
    #include <chrono>
    #include <random>
    #include <thread>
    // Android system/core header file
    #include <cutils/properties.h>      // for property_get_int32 used in the constructor

    // mtkcam custom header file
    
    // mtkcam global header file
    #include <mtkcam/utils/std/Log.h>
    // Module header file
    #include <mtkcam/drv/iopipe/SImager/IImageTransform.h>
    #include <mtkcam/utils/metastore/IMetadataProvider.h>
    #include <mtkcam3/3rdparty/plugin/PipelinePlugin.h>
    #include <mtkcam3/3rdparty/plugin/PipelinePluginType.h>
    //
    #include <mtkcam/utils/metadata/client/mtk_metadata_tag.h>
    #include <mtkcam/utils/metadata/hal/mtk_platform_metadata_tag.h>
    // Local header file
    #include <dual_camera.h>
    
    using namespace NSCam;
    using namespace android;
    using namespace std;
    using namespace NSCam::NSPipelinePlugin;
    /******************************************************************************
     *
     ******************************************************************************/
    #define MY_LOGV(fmt, arg...)        CAM_LOGV("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
    #define MY_LOGD(fmt, arg...)        CAM_LOGD("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
    #define MY_LOGI(fmt, arg...)        CAM_LOGI("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
    #define MY_LOGW(fmt, arg...)        CAM_LOGW("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
    #define MY_LOGE(fmt, arg...)        CAM_LOGE("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
    //
    #define MY_LOGV_IF(cond, ...)       do { if( (cond) ) { MY_LOGV(__VA_ARGS__); } }while(0)
    #define MY_LOGD_IF(cond, ...)       do { if( (cond) ) { MY_LOGD(__VA_ARGS__); } }while(0)
    #define MY_LOGI_IF(cond, ...)       do { if( (cond) ) { MY_LOGI(__VA_ARGS__); } }while(0)
    #define MY_LOGW_IF(cond, ...)       do { if( (cond) ) { MY_LOGW(__VA_ARGS__); } }while(0)
    #define MY_LOGE_IF(cond, ...)       do { if( (cond) ) { MY_LOGE(__VA_ARGS__); } }while(0)
    
    /*******************************************************************************
    * MACRO Utilities Define.
    ********************************************************************************/
    namespace { // anonymous namespace for debug MARCO function
    using AutoObject = std::unique_ptr<const char, std::function<void(const char*)>>;
    //
    auto
    createAutoScoper(const char* funcName) -> AutoObject
    {
        CAM_LOGD("[%s] +", funcName);
        return AutoObject(funcName, [](const char* p)
        {
            CAM_LOGD("[%s] -", p);
        });
    }
    #define SCOPED_TRACER() auto scoped_tracer = ::createAutoScoper(__FUNCTION__)
    //
    auto
    createAutoTimer(const char* funcName, const char* text) -> AutoObject
    {
        using Timing = std::chrono::time_point<std::chrono::high_resolution_clock>;
        using DuationTime = std::chrono::duration<float, std::milli>;
    
        Timing startTime = std::chrono::high_resolution_clock::now();
        return AutoObject(text, [funcName, startTime](const char* p)
        {
            Timing endTime = std::chrono::high_resolution_clock::now();
            DuationTime duationTime = endTime - startTime;
            CAM_LOGD("[%s] %s, elapsed(ms):%.4f",funcName, p, duationTime.count());
        });
    }
    #define AUTO_TIMER(TEXT) auto auto_timer = ::createAutoTimer(__FUNCTION__, TEXT)
    //
    #define UNREFERENCED_PARAMETER(param) (param)
    //
    } // end anonymous namespace for debug MARCO function
    
    /*******************************************************************************
    * Alias.
    ********************************************************************************/
    using namespace NSCam;
    using namespace NSCam::NSPipelinePlugin;
    using namespace NSCam::NSIoPipe::NSSImager;
    
    //++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    //  Type Alias..
    //++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    using Property = FusionPlugin::Property;
    using Selection = FusionPlugin::Selection;
    using RequestPtr = FusionPlugin::Request::Ptr;
    using RequestCallbackPtr = FusionPlugin::RequestCallback::Ptr;
    //
    template<typename T>
    using AutoPtr             = std::unique_ptr<T, std::function<void(T*)>>;
    //
    using ImgPtr              = AutoPtr<IImageBuffer>;
    using MetaPtr             = AutoPtr<IMetadata>;
    using ImageTransformPtr   = AutoPtr<IImageTransform>;
    
    /*******************************************************************************
    * Namespace Start.
    ********************************************************************************/
    namespace { // anonymous namespace
    
    /*******************************************************************************
    * Class Definition
    ********************************************************************************/
    /**
     * @brief third party pure bokeh algo. provider
     */
    class DualCameraCapture final: public FusionPlugin::IProvider
    {
    //++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    //  Instantiation.
    //++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    public:
        DualCameraCapture();
    
    //++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    //  FusionPlugin::IProvider Public Operations.
    //++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    public:
        void set(MINT32 iOpenId, MINT32 iOpenId2) override;
    
        const Property& property() override;
    
        MERROR negotiate(Selection& sel) override;
    
        void init() override;
    
        MERROR process(RequestPtr requestPtr, RequestCallbackPtr callbackPtr) override;
    
        void abort(vector<RequestPtr>& requestPtrs) override;
    
        void uninit() override;
    
        ~DualCameraCapture();
    //++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    //  DualCameraCapture Private Operator.
    //++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    private:
        MERROR processDone(const RequestPtr& requestPtr, const RequestCallbackPtr& callbackPtr, MERROR status);
    
    //++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    //  DualCameraCapture Private Data Members.
    //++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    private:
        MINT32 mEnable;
        //
        MINT32 mOpenId;
        MINT32 mOpenId2;
        MINT32 mDump;
        DualCamera* mDualCamera = NULL;
    };
    REGISTER_PLUGIN_PROVIDER(Fusion, DualCameraCapture);
    
    /**
     * @brief utility class
     */
    class DualCameraUtility final
    {
    //++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    //  Instantiation.
    //++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    public:
        DualCameraUtility() = delete;
    
    //++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    //  DualCameraUtility Public Operations.
    //++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    public:
        static inline ImageTransformPtr createImageTransformPtr();
    
        static inline ImgPtr createImgPtr(BufferHandle::Ptr& hangle);
    
        static inline MetaPtr createMetaPtr(MetadataHandle::Ptr& hangle);
    
        static inline MVOID dump(const IImageBuffer* pImgBuf, const std::string& dumpName);
    
        static inline MVOID dump(IMetadata* pMetaData, const std::string& dumpName);
    
        static inline const char * format2String(MINT format);
    
        static inline MVOID saveImg(NSCam::IImageBuffer* pImgBuf, const std::string& fileName);
    };
    
    //++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    //  DualCameraUtility implementation.
    //+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    ImageTransformPtr
    DualCameraUtility::
    createImageTransformPtr()
    {
        return ImageTransformPtr(IImageTransform::createInstance(), [](IImageTransform *p)
        {
            p->destroyInstance();
        });
    }
    
    ImgPtr
    DualCameraUtility::
    createImgPtr(BufferHandle::Ptr& hangle)
    {
        return ImgPtr(hangle->acquire(), [hangle](IImageBuffer* p)
        {
            UNREFERENCED_PARAMETER(p);
            hangle->release();
        });
    };
    
    MetaPtr
    DualCameraUtility::
    createMetaPtr(MetadataHandle::Ptr& hangle)
    {
        return MetaPtr(hangle->acquire(), [hangle](IMetadata* p)
        {
            UNREFERENCED_PARAMETER(p);
            hangle->release();
        });
    };
    
    MVOID
    DualCameraUtility::
    dump(const IImageBuffer* pImgBuf, const std::string& dumpName)
    {
        MY_LOGD("dump image info, dumpName:%s, info:[a:%p, si:%dx%d, st:%zu, f:0x%x, va:%p]",
            dumpName.c_str(), pImgBuf,
            pImgBuf->getImgSize().w, pImgBuf->getImgSize().h,
            pImgBuf->getBufStridesInBytes(0),
            pImgBuf->getImgFormat(),
            reinterpret_cast<void*>(pImgBuf->getBufVA(0)));
    }
    
    MVOID
    DualCameraUtility::
    dump(IMetadata* pMetaData, const std::string& dumpName)
    {
        MY_LOGD("dump meta info, dumpName:%s, addr::%p, count:%u",
            dumpName.c_str(), pMetaData, pMetaData->count());
    }
    
    MVOID
    DualCameraUtility::
    saveImg(NSCam::IImageBuffer* pImgBuf, const std::string& fileName)
    {
    
        char path[256];
        snprintf(path, sizeof(path), "/data/vendor/camera_dump/%s_%zu_%dx%d.%s", fileName.c_str(), pImgBuf->getBufStridesInBytes(0),
                 pImgBuf->getImgSize().w, pImgBuf->getImgSize().h, format2String(pImgBuf->getImgFormat()));
        pImgBuf->saveToFile(path);
    }
    
    const char* 
    DualCameraUtility::
    format2String(MINT format) {
        switch(format) {
           case NSCam::eImgFmt_RGBA8888:          return "rgba";
           case NSCam::eImgFmt_RGB888:            return "rgb";
           case NSCam::eImgFmt_RGB565:            return "rgb565";
           case NSCam::eImgFmt_STA_BYTE:          return "byte";
           case NSCam::eImgFmt_YVYU:              return "yvyu";
           case NSCam::eImgFmt_UYVY:              return "uyvy";
           case NSCam::eImgFmt_VYUY:              return "vyuy";
           case NSCam::eImgFmt_YUY2:              return "yuy2";
           case NSCam::eImgFmt_YV12:              return "yv12";
           case NSCam::eImgFmt_YV16:              return "yv16";
           case NSCam::eImgFmt_NV16:              return "nv16";
           case NSCam::eImgFmt_NV61:              return "nv61";
           case NSCam::eImgFmt_NV12:              return "nv12";
           case NSCam::eImgFmt_NV21:              return "nv21";
           case NSCam::eImgFmt_I420:              return "i420";
           case NSCam::eImgFmt_I422:              return "i422";
           case NSCam::eImgFmt_Y800:              return "y800";
           case NSCam::eImgFmt_BAYER8:            return "bayer8";
           case NSCam::eImgFmt_BAYER10:           return "bayer10";
           case NSCam::eImgFmt_BAYER12:           return "bayer12";
           case NSCam::eImgFmt_BAYER14:           return "bayer14";
           case NSCam::eImgFmt_FG_BAYER8:         return "fg_bayer8";
           case NSCam::eImgFmt_FG_BAYER10:        return "fg_bayer10";
           case NSCam::eImgFmt_FG_BAYER12:        return "fg_bayer12";
           case NSCam::eImgFmt_FG_BAYER14:        return "fg_bayer14";
           default:                               return "unknown";
        }
    }
    
    //++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    //  ThirdPartyFusionProvider implementation.
    //+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    DualCameraCapture::
    DualCameraCapture()
    : mEnable(-1)
    , mOpenId(-1)
    , mOpenId2(-1)
    , mDump(-1)
    {
        // on:1/off:0/auto:-1
        mEnable = ::property_get_int32("vendor.debug.camera.dualcamera.enable", mEnable);
        mDump = ::property_get_int32("vendor.debug.camera.dualcamera.dump", mDump);
        mDualCamera = new DualCamera();
        MY_LOGD("ctor:%p, mEnable:%d", this, mEnable);
    }
    
    void
    DualCameraCapture::
    set(MINT32 iOpenId, MINT32 iOpenId2)
    {
        mOpenId = iOpenId;
        mOpenId2 = iOpenId2;
        MY_LOGD("set openId:%d openId2:%d", mOpenId, mOpenId2);
    }
    
    const Property&
    DualCameraCapture::
    property()
    {
        static const Property prop = []() -> const Property
        {
            Property ret;
            ret.mName = "DualCamera";
            ret.mFeatures = TP_FEATURE_PUREBOKEH;
            ret.mFaceData = eFD_Cache;
            ret.mBoost = eBoost_CPU;
            ret.mInitPhase = ePhase_OnPipeInit;
            return ret;
        }();
        return prop;
    }
    
    MERROR
    DualCameraCapture::
    negotiate(Selection& sel)
    {
        SCOPED_TRACER();
    
        if( mEnable == 0 )
        {
            MY_LOGD("force off tp dual camera");
            return BAD_VALUE;
        }
        // INPUT
        {
            sel.mIBufferFull
                .setRequired(MTRUE)
                .addAcceptedFormat(eImgFmt_NV21)
                .addAcceptedSize(eImgSize_Full);
    
            sel.mIBufferFull2
                .setRequired(MTRUE)
                .addAcceptedFormat(eImgFmt_NV21)
                .addAcceptedSize(eImgSize_Full);
    
            sel.mIMetadataApp.setRequired(MTRUE);
            sel.mIMetadataHal.setRequired(MTRUE);
            sel.mIMetadataHal2.setRequired(MTRUE);
            sel.mIMetadataDynamic.setRequired(MTRUE);
            sel.mIMetadataDynamic2.setRequired(MTRUE);
        }
        // OUTPUT
        {
            sel.mOBufferFull
                .setRequired(MTRUE)
                .addAcceptedFormat(eImgFmt_NV21)
                .addAcceptedSize(eImgSize_Full);
    
            sel.mOMetadataApp.setRequired(MTRUE);
            sel.mOMetadataHal.setRequired(MTRUE);
        }
        return OK;
    }
    
    void
    DualCameraCapture::
    init()
    {
        SCOPED_TRACER();
        ::srand(time(nullptr));
    }
    
    MERROR
    DualCameraCapture::
    process(RequestPtr requestPtr, RequestCallbackPtr callbackPtr)
    {
        SCOPED_TRACER();
    
        auto isValidInput = [](const RequestPtr& requestPtr) -> MBOOL
        {
            const MBOOL ret = requestPtr->mIBufferFull != nullptr
                        && requestPtr->mIBufferFull2 != nullptr
                        && requestPtr->mIMetadataApp != nullptr
                        && requestPtr->mIMetadataHal != nullptr
                        && requestPtr->mIMetadataHal2 != nullptr;
            if( !ret )
            {
                MY_LOGE("invalid request with input, req:%p, inFullImg:%p, inFullImg2:%p, inAppMeta:%p, inHalMeta:%p, inHalMeta2:%p",
                    requestPtr.get(),
                    requestPtr->mIBufferFull.get(),
                    requestPtr->mIBufferFull2.get(),
                    requestPtr->mIMetadataApp.get(),
                    requestPtr->mIMetadataHal.get(),
                    requestPtr->mIMetadataHal2.get());
            }
            return ret;
        };
    
        auto isValidOutput = [](const RequestPtr& requestPtr) -> MBOOL
        {
            const MBOOL ret = requestPtr->mOBufferFull != nullptr
                        && requestPtr->mOMetadataApp != nullptr
                        && requestPtr->mOMetadataHal != nullptr;
            if( !ret )
            {
                MY_LOGE("invalid request with input, req:%p, outFullImg:%p, outAppMeta:%p, outHalMeta:%p",
                    requestPtr.get(),
                    requestPtr->mOBufferFull.get(),
                    requestPtr->mOMetadataApp.get(),
                    requestPtr->mOMetadataHal.get());
            }
            return ret;
        };
    
        MY_LOGD("process, reqAdrr:%p", requestPtr.get());
    
        if( !isValidInput(requestPtr) )
        {
            return processDone(requestPtr, callbackPtr, BAD_VALUE);
        }
    
        if( !isValidOutput(requestPtr) )
        {
            return processDone(requestPtr, callbackPtr, BAD_VALUE);
        }
        //
        //
        {
            // note: we can just call createXXXXPtr one time for a specified handle
            ImgPtr inMainImgPtr = DualCameraUtility::createImgPtr(requestPtr->mIBufferFull);
            ImgPtr inSubImgPtr = DualCameraUtility::createImgPtr(requestPtr->mIBufferFull2);
            ImgPtr outFSImgPtr = DualCameraUtility::createImgPtr(requestPtr->mOBufferFull);
            //
            MetaPtr inAppMetaPtr = DualCameraUtility::createMetaPtr(requestPtr->mIMetadataApp);
            MetaPtr inMainHalMetaPtr = DualCameraUtility::createMetaPtr(requestPtr->mIMetadataHal);
            MetaPtr inSubHalMetaPtr = DualCameraUtility::createMetaPtr(requestPtr->mIMetadataHal2);
            MetaPtr outAppMetaPtr = DualCameraUtility::createMetaPtr(requestPtr->mOMetadataApp);
            MetaPtr outHalMetaPtr = DualCameraUtility::createMetaPtr(requestPtr->mOMetadataHal);
            // dump info
            {
                DualCameraUtility::dump(inMainImgPtr.get(), "inputMainImg");
                DualCameraUtility::dump(inSubImgPtr.get(), "inputSubImg");
                DualCameraUtility::dump(outFSImgPtr.get(), "outFSImg");
                //
                DualCameraUtility::dump(inAppMetaPtr.get(), "inAppMeta");
                DualCameraUtility::dump(inMainHalMetaPtr.get(), "inMainHalMeta");
                DualCameraUtility::dump(inSubHalMetaPtr.get(), "inSubHalMeta");
                DualCameraUtility::dump(outAppMetaPtr.get(), "outAppMeta");
                DualCameraUtility::dump(outHalMetaPtr.get(), "outHalMeta");
            }
    
            //dual camera algo
            {
                AUTO_TIMER("proces dual camera algo.");
                NSCam::IImageBuffer* inMainImgBuf = inMainImgPtr.get();
                NSCam::IImageBuffer* inSubImgBuf = inSubImgPtr.get();
                NSCam::IImageBuffer* outImgBuf = outFSImgPtr.get();
                if (mDump) {
                    DualCameraUtility::saveImg(inMainImgBuf, "inputMainImg");
                    DualCameraUtility::saveImg(inSubImgBuf, "inputSubImg");
                }
                memcpy(reinterpret_cast<uchar*>(outImgBuf->getBufVA(0)), reinterpret_cast<uchar*>(inMainImgBuf->getBufVA(0)), inMainImgBuf->getBufSizeInBytes(0));
                memcpy(reinterpret_cast<uchar*>(outImgBuf->getBufVA(1)), reinterpret_cast<uchar*>(inMainImgBuf->getBufVA(1)), inMainImgBuf->getBufSizeInBytes(1));
                if (mDualCamera != NULL) {
                    mDualCamera->processNV21(reinterpret_cast<uchar*>(outImgBuf->getBufVA(0)), reinterpret_cast<uchar*>(outImgBuf->getBufVA(1)),
                                             outImgBuf->getImgSize().w, outImgBuf->getImgSize().h,
                                             reinterpret_cast<uchar*>(inSubImgBuf->getBufVA(0)), reinterpret_cast<uchar*>(inSubImgBuf->getBufVA(1)),
                                             inSubImgBuf->getImgSize().w, inSubImgBuf->getImgSize().h);
                }
            }
        }
        return processDone(requestPtr, callbackPtr, OK);
    }
    
    MERROR
    DualCameraCapture::
    processDone(const RequestPtr& requestPtr, const RequestCallbackPtr& callbackPtr, MERROR status)
    {
        SCOPED_TRACER();
    
        MY_LOGD("process done, call complete, reqAddr:%p, callbackPtr:%p, status:%d",
            requestPtr.get(), callbackPtr.get(), status);
    
        if( callbackPtr != nullptr )
        {
            callbackPtr->onCompleted(requestPtr, status);
        }
        return OK;
    }
    
    void
    DualCameraCapture::
    abort(vector<RequestPtr>& requestPtrs)
    {
        SCOPED_TRACER();
    
        for(auto& item : requestPtrs)
        {
            MY_LOGD("abort request, reqAddr:%p", item.get());
        }
    }
    
    void
    DualCameraCapture::
    uninit()
    {
        SCOPED_TRACER();
    }
    
    DualCameraCapture::
    ~DualCameraCapture()
    {
        MY_LOGD("dtor:%p", this);
        if (mDualCamera != NULL) {
            delete mDualCamera;
            mDualCamera = NULL;
        }
    }
    
    }  // anonymous namespace
    
    

    The main functions:

    • In property(), set the feature type to TP_FEATURE_PUREBOKEH and fill in the name and other attributes.

    • In negotiate(), configure the formats and sizes of the input and output images the algorithm needs. Note that a dual-camera algorithm has two input buffers but only one output buffer.

    • In process(), hook in the algorithm by calling its interface function processNV21 to do the processing.

    When integrating, you can refer to the sample files TPPureBokehImpl.cpp and TPFusionImpl.cpp provided by MTK.
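
    For debugging, the two system properties read in the provider's constructor can be toggled from an adb shell (assuming a userdebug/eng build where adb root works):

    adb root
    # force the plugin off (negotiate() returns BAD_VALUE); set it back to -1 (auto) or 1 to enable
    adb shell setprop vendor.debug.camera.dualcamera.enable 0
    # dump the input images to /data/vendor/camera_dump/ before processing
    adb shell setprop vendor.debug.camera.dualcamera.dump 1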

    3.3.4 mtkcam3/3rdparty/customer/Android.mk

    The shared library that ultimately goes into vendor.img is libmtkcam_3rdparty.customer.so. Therefore, we also need to modify Android.mk so that the module libmtkcam_3rdparty.customer depends on libmtkcam.plugin.tp_dc.

    At the same time, to avoid conflicts and to get output images faster, we also remove MTK's sample libmtkcam.plugin.tp_purebokeh.

    diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/Android.mk b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/Android.mk
    index 5e5dd6524f..bf2f6ffeae 100755
    --- a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/Android.mk
    +++ b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/Android.mk
    @@ -65,7 +65,7 @@ LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.tp_bokeh
     LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.tp_depth
     LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.tp_fusion
     LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.tp_dc_hdr
    -LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.tp_purebokeh
    +#LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.tp_purebokeh
     #
     LOCAL_SHARED_LIBRARIES += libcam.iopipe
     LOCAL_SHARED_LIBRARIES += libmtkcam_modulehelper
    @@ -83,6 +83,11 @@ LOCAL_SHARED_LIBRARIES += libyuv.vendor
     LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.tp_mfnr
     endif
    
    +ifeq ($(QXT_DUALCAMERA_SUPPORT), yes)
    +LOCAL_SHARED_LIBRARIES += libdualcamera
    +LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.tp_dc
    +endif
    +
    
    

    Since MTK has already defined the relevant metadata, we do not need to define any custom metadata either.

    Once these steps are done, the integration work is essentially complete. We need to rebuild the system source; to save time, we can build only vendor.img.
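
    For reference, a typical AOSP-style sequence for rebuilding and flashing just vendor.img might look like the following. The lunch target and product name are placeholders for your own project, and many MTK projects download images with SP Flash Tool instead of fastboot, so adapt the last steps to your project's flashing flow:

    source build/envsetup.sh
    lunch full_xxxx-userdebug        # placeholder lunch target
    make vendorimage -j16
    adb reboot bootloader
    fastboot flash vendor out/target/product/xxxx/vendor.img
    fastboot reboot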

    4. Calling the algorithm from the APP

    Since MTK's stock Camera APP already has a dual-camera stereo mode, we do not need to write an APP to verify the algorithm. Flash the full system image, or just vendor.img, onto the test device, boot it, and enter the stereo mode of the stock MTK Camera APP. Let's take a shot and see the result:

    [Image: sample capture showing the auxiliary image pasted into the middle of the main image as a picture-in-picture]

    The colors from the auxiliary camera look a bit off, but in any case the mock algorithm library is working: it has composited the main and auxiliary images into a picture-in-picture.

    5. Conclusion

    Dual-camera algorithms are the most complex of all, involving calibration, main/auxiliary synchronization, depth computation, blur tuning, edge handling, and more. Even a small problem in either the algorithm or the integration can make the dual-camera results differ wildly. When integrating a dual-camera algorithm, please be careful, careful, and careful again.

    This wraps up the three-article series on MTK HAL algorithm integration. The lunar year 2020 is about to end, and this should be my last article of it. With the holiday approaching, I wish everyone a happy break in advance!

    Original article: https://www.jianshu.com/p/d4a2aacc1760

