Android NDK Development: Face Detection with OpenCV


By itfitness | Published 2022-12-18 09:58

    Contents

    Demo

    Implementation steps

    1. Camera preview with CameraX

    Add the dependencies

       // CameraX core library using camera2 implementation
        implementation "androidx.camera:camera-camera2:1.0.1"
        // CameraX Lifecycle Library
        implementation "androidx.camera:camera-lifecycle:1.0.1"
        // CameraX View class
        implementation "androidx.camera:camera-view:1.0.0-alpha27"
    

    Permission

    <uses-permission android:name="android.permission.CAMERA"/>
    

    Layout code

    <?xml version="1.0" encoding="utf-8"?>
    <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
        xmlns:app="http://schemas.android.com/apk/res-auto"
        xmlns:tools="http://schemas.android.com/tools"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        tools:context=".MainActivity">
        <androidx.camera.view.PreviewView
            android:id="@+id/viewFinder"
            android:layout_width="match_parent"
            android:layout_height="match_parent" />
    </FrameLayout>
    

    Activity code (based on the official CameraX sample)

    class MainActivity : AppCompatActivity(), ImageAnalysis.Analyzer {
        private lateinit var cameraExecutor: ExecutorService
        private lateinit var viewFinder:PreviewView
        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            setContentView(R.layout.activity_main)
    
            viewFinder = findViewById(R.id.viewFinder)
            initCamera()
        }
    
        private fun initCamera(){
            cameraExecutor = Executors.newSingleThreadExecutor()
            // Request camera permissions
            if (allPermissionsGranted()) {
                startCamera()
            } else {
                ActivityCompat.requestPermissions(
                    this, REQUIRED_PERMISSIONS, REQUEST_CODE_PERMISSIONS
                )
            }
        }
    
        private fun allPermissionsGranted() = REQUIRED_PERMISSIONS.all {
            ContextCompat.checkSelfPermission(
                baseContext, it
            ) == PackageManager.PERMISSION_GRANTED
        }
    
        override fun onRequestPermissionsResult(
            requestCode: Int, permissions: Array<String>, grantResults:
            IntArray
        ) {
            super.onRequestPermissionsResult(requestCode, permissions, grantResults)
            if (requestCode == REQUEST_CODE_PERMISSIONS) {
                if (allPermissionsGranted()) {
                    startCamera()
                } else {
                    Toast.makeText(
                        this,
                        "Permissions not granted by the user.",
                        Toast.LENGTH_SHORT
                    ).show()
                    finish()
                }
            }
        }
    
        private fun startCamera() {
            val cameraProviderFuture = ProcessCameraProvider.getInstance(this)
            cameraProviderFuture.addListener(Runnable {
                val cameraProvider: ProcessCameraProvider = cameraProviderFuture.get()
                val preview = Preview.Builder()
                    .build()
                    .also {
                        it.setSurfaceProvider(viewFinder.surfaceProvider)
                    }
    
                val imageAnalyzer = ImageAnalysis.Builder()
                    .setTargetAspectRatio(AspectRatio.RATIO_16_9)
                    .build()
                    .also {
                        it.setAnalyzer(cameraExecutor, this@MainActivity)
                    }
    
                val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA
                try {
                    cameraProvider.unbindAll()
                    cameraProvider.bindToLifecycle(
                        this, cameraSelector, preview, imageAnalyzer
                    )
                } catch (exc: Exception) {
                    Log.e(TAG, "Use case binding failed", exc)
                }
            }, ContextCompat.getMainExecutor(this))
        }
    
        override fun onDestroy() {
            super.onDestroy()
            cameraExecutor.shutdown()
        }
    
        companion object {
            private const val TAG = "CameraXBasic"
            private const val REQUEST_CODE_PERMISSIONS = 10
            private val REQUIRED_PERMISSIONS = arrayOf(Manifest.permission.CAMERA)
    
        }
    
        @SuppressLint("UnsafeOptInUsageError")
        override fun analyze(image: ImageProxy) {
            image.close()
        }
    
    }
    
    2. Setting up the NDK environment

    Since the project uses OpenCV, we need an NDK build environment. First, add the following configuration to the app module's build.gradle:

    android {
        compileSdk 32
    
        defaultConfig {
            applicationId "com.itfitness.opencvcheckface"
        // ... (some code omitted)
            externalNativeBuild{
                cmake {
                    cppFlags "-frtti -fexceptions -std=c++11"
                    arguments "-DANDROID_STL=c++_shared"
                }
            }
            ndk{
                abiFilters 'armeabi-v7a'
            }
        }
    
       // ... (some code omitted)
        externalNativeBuild{
            cmake{
                path 'CMakeLists.txt'
            }
        }
    }
    

    Then create a CMakeLists.txt file under the app folder.

    Then create a cpp folder under app->src->main.

    Then create an NDKInterface.cpp file inside the cpp folder.
    3. Adding the OpenCV library

    Next, open the OpenCV website and download the Android SDK; version 4.0.1 is used here.

    After downloading, unzip it and delete the files we don't need, keeping only the useful parts.
    Delete the unneeded folders under OpenCV-android-sdk and under OpenCV-android-sdk->sdk; only sdk/native (the headers and prebuilt libraries) is used by the build configuration below.

    Then create a 3rdparty folder under our project's cpp directory to hold third-party libraries, and copy the OpenCV-android-sdk folder into it.

    Then configure the CMakeLists.txt file as follows:
    cmake_minimum_required(VERSION 3.10)
    
    #OpenCV
    SET(LIBOPENCV_DIR ${CMAKE_CURRENT_SOURCE_DIR}/src/main/cpp/3rdparty/OpenCV-android-sdk)
    INCLUDE_DIRECTORIES(${LIBOPENCV_DIR}/sdk/native/jni/include)
    
    add_library(LIBOPENCV SHARED IMPORTED)
    set_target_properties(LIBOPENCV PROPERTIES IMPORTED_LOCATION ${LIBOPENCV_DIR}/sdk/native/libs/${ANDROID_ABI}/libopencv_java4.so)
    
    
    # Path to our own source files
    SET(CPPDIR ${CMAKE_CURRENT_SOURCE_DIR}/src/main/cpp)
    
    SET(LIB_SRC ${CPPDIR}/NDKInterface.cpp)
    
    add_library(NDKInterface SHARED ${LIB_SRC})
    
    target_link_libraries(
            NDKInterface
            log
            LIBOPENCV
    )
    

    Android Studio may not pick this up right away; restarting Android Studio fixes it.

    4. Loading the model data

    First, download the two model files we need:
    https://raw.githubusercontent.com/opencv/opencv/master/samples/dnn/face_detector/deploy.prototxt
    https://raw.githubusercontent.com/opencv/opencv_3rdparty/dnn_samples_face_detector_20180205_fp16/res10_300x300_ssd_iter_140000_fp16.caffemodel
    Then put them on the SD card; for convenience I copied them there manually.


    Then we write the function that loads the model data.
    First, create a NativeUtil helper object for calling the JNI functions:
    object NativeUtil {
        init {
            System.loadLibrary("NDKInterface")
            System.loadLibrary("opencv_java4")
        }
    
        /**
         * Load the model
         */
        external fun ndkInit(protoTxtFilePath:String,modelFilePath:String)
    }
    

    Then implement the NDK function:

    dnn::Net model;
    
    extern "C"
    JNIEXPORT void JNICALL
    Java_com_itfitness_opencvcheckface_NativeUtil_ndkInit(JNIEnv *env, jobject thiz,
                                                          jstring proto_txt_file_path,
                                                          jstring model_file_path) {
        const char *prototxt_path = env->GetStringUTFChars(proto_txt_file_path, nullptr);

        const char *model_path = env->GetStringUTFChars(model_file_path, nullptr);
    
        // load the model
        model = cv::dnn::readNetFromCaffe(prototxt_path, model_path);
    
        // release the JNI string resources
        env->ReleaseStringUTFChars(proto_txt_file_path, prototxt_path);
        env->ReleaseStringUTFChars(model_file_path, model_path);
    }
    
    5. Implementing face detection

    The image data for face detection comes from CameraX's analyze callback:

    override fun analyze(image: ImageProxy) {
            image.close()
        }
    

    The callback does not deliver NV21 data, so we first convert it to NV21; create an ImageUtil class for the conversion:

    object ImageUtil {
        /**
         * Convert YUV_420_888 to NV21.
         *
         * @param image CameraX ImageProxy
         * @return NV21 byte array
         */
        fun yuv420ToNv21(image: ImageProxy): ByteArray{
            val planes = image.planes
            val yBuffer: ByteBuffer = planes[0].buffer
            val uBuffer: ByteBuffer = planes[1].buffer
            val vBuffer: ByteBuffer = planes[2].buffer
            val ySize: Int = yBuffer.remaining()
            val uSize: Int = uBuffer.remaining()
            val vSize: Int = vBuffer.remaining()
            val size = image.width * image.height
            val nv21 = ByteArray(size * 3 / 2)
            yBuffer.get(nv21, 0, ySize)
            vBuffer.get(nv21, ySize, vSize)
            val u = ByteArray(uSize)
            uBuffer.get(u)
            // write U into every other position so V and U alternate (VU order)
            var pos = ySize + 1
            for (i in 0 until uSize) {
                if (i % 2 == 0) {
                    nv21[pos] = u[i]
                    pos += 2
                }
            }
            return nv21
        }
    }
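The layout this conversion produces can be sketched in plain C++: for a W×H frame, NV21 stores W·H luma bytes followed by W·H/2 chroma bytes interleaved as V,U,V,U,… A minimal sketch, assuming fully deinterleaved U and V planes (pixel stride 1) rather than the overlapping CameraX plane buffers the Kotlin code handles:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Build an NV21 buffer from separate Y, U and V planes.
// Assumes the chroma planes are deinterleaved (pixel stride 1) and
// 2x2 subsampled, i.e. u and v each hold width*height/4 bytes.
std::vector<unsigned char> planesToNv21(const std::vector<unsigned char>& y,
                                        const std::vector<unsigned char>& u,
                                        const std::vector<unsigned char>& v) {
    std::vector<unsigned char> nv21;
    nv21.reserve(y.size() + u.size() + v.size());
    // luma first
    nv21.insert(nv21.end(), y.begin(), y.end());
    // then chroma, interleaved V,U,V,U,...
    for (std::size_t i = 0; i < v.size(); ++i) {
        nv21.push_back(v[i]);
        nv21.push_back(u[i]);
    }
    return nv21;
}
```

The Kotlin version above reaches the same layout differently: it copies the whole V buffer in one go and then overwrites every other chroma byte with U, which relies on how CameraX lays out the U and V planes on real devices.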
    

    The frames the camera delivers are rotated relative to what we see on screen, so we also pass the rotation angle to the NDK function for OpenCV to correct, along with the image width and height. The code is as follows:
    Add a face-detection JNI function to NativeUtil:

    object NativeUtil {
        init {
            System.loadLibrary("NDKInterface")
            System.loadLibrary("opencv_java4")
        }
    
        /**
         * Load the model
         */
        external fun ndkInit(protoTxtFilePath:String,modelFilePath:String)
    
        /**
         * Face detection
         */
        external fun ndkCheckFace(yuvData:ByteArray,rotation:Int,width:Int,height:Int):Array<Rect>
    }
    

    The NDK function is as follows; we collect each detected face rectangle into an array and return it:

    /**
     * Rotate the image
     * @param mat
     * @param rotation
     */
    void rotateMat(cv::Mat &mat, int rotation) {
        if (rotation == 90) { // portrait
            cv::transpose(mat, mat);
            cv::flip(mat, mat, 1);
        } else if (rotation == 0) { // landscape-left
            cv::flip(mat, mat, 1);
        } else if (rotation == 180) { // landscape-right
            cv::flip(mat, mat, 0);
        }
    }
    extern "C"
    JNIEXPORT jobjectArray JNICALL
    Java_com_itfitness_opencvcheckface_NativeUtil_ndkCheckFace(JNIEnv *env, jobject thiz,
                                                               jbyteArray yuv_data, jint rotation,jint width,jint height) {
        jbyte *yuvBuffer = env->GetByteArrayElements(yuv_data, nullptr);
    
        Mat imageSrc(height + height / 2, width, CV_8UC1, (unsigned char *) yuvBuffer);
    
        Mat bgrCVFrame;
        cvtColor(imageSrc, bgrCVFrame, cv::COLOR_YUV2BGR_NV21);
    
        rotateMat(bgrCVFrame,rotation);
    
        Mat blob = dnn::blobFromImage(bgrCVFrame, 1.0, cv::Size(300, 300),
                                              cv::Scalar(104.0, 177.0, 123.0));
        model.setInput(blob);
        Mat detection = model.forward();
    
        Mat detectionMat(detection.size[2], detection.size[3], CV_32F, detection.ptr<float>());
    
        // look up android.graphics.Rect
        jclass rectCls = env->FindClass("android/graphics/Rect");
        jmethodID rect_construct = env->GetMethodID(rectCls, "<init>", "(IIII)V"); // Rect constructor
    
        // count the detections above the confidence threshold
        int arrayLength = 0;
        for (int i = 0; i < detectionMat.rows; ++i) {
            float confidence = detectionMat.at<float>(i, 2);
            if (confidence > 0.5) {
                arrayLength++;
            }
        }
    
        jobjectArray faceRectArray = env->NewObjectArray(arrayLength,rectCls,nullptr);
    
        if(arrayLength > 0){
            // index into the output array
            int index = 0;
            // fill the Rect array
            for (int i = 0; i < detectionMat.rows; ++i) {
                float confidence = detectionMat.at<float>(i, 2);
                if (confidence > 0.5) {
                    int xLeftBottom = static_cast<int>(detectionMat.at<float>(i, 3) * bgrCVFrame.cols);
                    int yLeftBottom = static_cast<int>(detectionMat.at<float>(i, 4) * bgrCVFrame.rows);
                    int xRightTop = static_cast<int>(detectionMat.at<float>(i, 5) * bgrCVFrame.cols);
                    int yRightTop = static_cast<int>(detectionMat.at<float>(i, 6) * bgrCVFrame.rows);
                    jobject rect = env->NewObject(rectCls,rect_construct,xLeftBottom,yLeftBottom,xRightTop,yRightTop);
                    env->SetObjectArrayElement(faceRectArray,index,rect);
                    index++;
    //                Rect faceRect((int) xLeftBottom, (int) yLeftBottom, (int) (xRightTop - xLeftBottom),
    //                              (int) (yRightTop - yLeftBottom));
                // draw the face rectangle
    //            rectangle(bgrCVFrame, faceRect, cv::Scalar(0, 255, 0),5);
                }
            }
        }
        // unpin the YUV buffer; JNI_ABORT since we did not modify it
        env->ReleaseByteArrayElements(yuv_data, yuvBuffer, JNI_ABORT);
        return faceRectArray;
    }
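The forward() output of this SSD face detector is a 1x1xNx7 blob: each row holds [imageId, classId, confidence, left, top, right, bottom], with the coordinates normalized to [0, 1]. The thresholding and scaling done above can be sketched without OpenCV; the struct and function names below are ours, and the 0.5 threshold mirrors the code above:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A detected face in pixel coordinates (hypothetical helper type).
struct FaceRect { int left, top, right, bottom; };

// Parse a flat array of 7-float SSD detection rows, keep rows whose
// confidence (field 2) exceeds the threshold, and scale the normalized
// box corners (fields 3..6) to the frame's pixel size.
std::vector<FaceRect> parseDetections(const std::vector<float>& rows,
                                      int frameWidth, int frameHeight,
                                      float threshold = 0.5f) {
    std::vector<FaceRect> faces;
    for (std::size_t i = 0; i + 7 <= rows.size(); i += 7) {
        float confidence = rows[i + 2];
        if (confidence > threshold) {
            faces.push_back({
                static_cast<int>(rows[i + 3] * frameWidth),
                static_cast<int>(rows[i + 4] * frameHeight),
                static_cast<int>(rows[i + 5] * frameWidth),
                static_cast<int>(rows[i + 6] * frameHeight),
            });
        }
    }
    return faces;
}
```

This is the same traversal the JNI code performs twice (once to size the array, once to fill it), collapsed into a single pass.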
    

    The camera frames are passed in like this:

    override fun analyze(image: ImageProxy) {
            val nv21Data = ImageUtil.yuv420ToNv21(image)
            val faceArray = NativeUtil.ndkCheckFace(nv21Data,90,image.width,image.height)
            image.close()
        }
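The hard-coded rotation of 90 hits rotateMat's portrait branch: a transpose followed by a horizontal flip is a 90° clockwise rotation, which also swaps the image's width and height. A minimal sketch of that identity on a plain row-major buffer (no OpenCV; the function name is ours):

```cpp
#include <cassert>
#include <vector>

// Rotate a rows x cols row-major image 90 degrees clockwise by
// transposing and then flipping horizontally, the same two steps
// rotateMat performs for rotation == 90. The result is cols x rows.
std::vector<int> rotate90cw(const std::vector<int>& src, int rows, int cols) {
    // transpose: (r, c) -> (c, r), giving a cols x rows image
    std::vector<int> t(src.size());
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c)
            t[c * rows + r] = src[r * cols + c];
    // horizontal flip of the cols x rows image: reverse each row
    std::vector<int> dst(src.size());
    for (int r = 0; r < cols; ++r)
        for (int c = 0; c < rows; ++c)
            dst[r * rows + c] = t[r * rows + (rows - 1 - c)];
    return dst;
}
```

The width/height swap explains why the analyze callback later passes image.height as the view's width and image.width as its height.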
    
    6. Drawing face boxes with a custom View

    We display the detected faces with a custom View. Create FaceRectView; because the view's width and height differ from the image's, the face rectangles must be scaled by the width and height ratios:

    class FaceRectView: View {
        private val paint:Paint = Paint(Paint.ANTI_ALIAS_FLAG)
        private var faceRectArray:Array<Rect>? = null
        private var faceImageWidthScale = 0f
        private var faceImageHeightScale = 0f
        constructor(context: Context?):this(context, null)
        constructor(context: Context?, attrs: AttributeSet?):this(context, attrs, 0)
        constructor(context: Context?, attrs: AttributeSet?, defStyleAttr: Int):super(context, attrs, defStyleAttr){
            paint.style = Paint.Style.STROKE
            paint.color = Color.GREEN
            paint.strokeWidth = 5.0f
        }
    
        @SuppressLint("DrawAllocation")
        override fun onDraw(canvas: Canvas?) {
            super.onDraw(canvas)
            faceRectArray?.let {
                for(rect in it){
                    // scale the face rect to view coordinates
                    val rectF = RectF(rect.left * faceImageWidthScale,
                            rect.top * faceImageHeightScale,
                            rect.right * faceImageWidthScale,
                            rect.bottom * faceImageHeightScale)
                    canvas?.drawRect(rectF,paint)
                }
            }
        }
    
        fun setFaceRect(rectArray: Array<Rect>, faceImageWidth: Int, faceImageHeight: Int){
            this.faceRectArray = rectArray
        // compute the view/image scale factors
            faceImageWidthScale = width.toFloat() / faceImageWidth.toFloat()
            faceImageHeightScale = height.toFloat() / faceImageHeight.toFloat()
            postInvalidate()
        }
    }
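The mapping in setFaceRect/onDraw is just an independent ratio per axis between the view size and the detector's frame size. A small sketch of the same arithmetic (the type and function names are ours, not part of the project):

```cpp
#include <cassert>

// A scaled rectangle in view coordinates (hypothetical helper type).
struct RectF { float left, top, right, bottom; };

// Map a face rect from image coordinates to view coordinates by
// scaling each axis independently, as FaceRectView does in onDraw.
RectF scaleRect(int left, int top, int right, int bottom,
                int viewW, int viewH, int imageW, int imageH) {
    float sx = static_cast<float>(viewW) / imageW;   // width ratio
    float sy = static_cast<float>(viewH) / imageH;   // height ratio
    return {left * sx, top * sy, right * sx, bottom * sy};
}
```

Because the two axes are scaled independently, boxes stay correct even when the view and the image have different aspect ratios, though the image content itself may then appear stretched in the preview.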
    

    Then add the face-box view to the layout file:

    <?xml version="1.0" encoding="utf-8"?>
    <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
        xmlns:app="http://schemas.android.com/apk/res-auto"
        xmlns:tools="http://schemas.android.com/tools"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        tools:context=".MainActivity">
        <androidx.camera.view.PreviewView
            android:id="@+id/viewFinder"
            android:layout_width="match_parent"
            android:layout_height="match_parent" />
        <com.itfitness.opencvcheckface.FaceRectView
            android:id="@+id/frv_face"
            android:layout_width="match_parent"
            android:layout_height="match_parent"/>
    </FrameLayout>
    

    Then, in the camera callback, pass the detection results to the view:

    override fun analyze(image: ImageProxy) {
            val nv21Data = ImageUtil.yuv420ToNv21(image)
            val faceArray = NativeUtil.ndkCheckFace(nv21Data,90,image.width,image.height)
            // because the image is rotated 90°, the image height becomes the view's width and the image width its height
            frv_face.setFaceRect(faceArray,image.height,image.width)
            image.close()
        }
    

    Sample source code

    https://gitee.com/itfitness/opencv-face-check

    Original: https://www.haomeiwen.com/subject/woijqdtx.html