
android camera2 camera preview

Author: lesliefang | Published 2020-09-13 17:54

    The camera2 API has been the recommended camera API since Android 5.0. It looks complicated at first glance, but it is actually quite clear.

    Everything here is based on the official documentation: https://developer.android.google.cn/reference/android/hardware/camera2/package-summary?hl=en

    camera2 can output the camera stream to multiple Surfaces at once (SurfaceView, SurfaceTexture, MediaCodec, MediaRecorder, Allocation, and ImageReader), so I wrote a demo that previews to a SurfaceView on top and a TextureView below at the same time. With camera1 you had to handle multi-surface preview yourself, which was a bit of a hassle; camera2 supports multiple surface outputs natively.

    If the various Surface types confuse you, read this carefully: https://source.android.com/devices/graphics/architecture

    A Surface is a drawing target backed by buffers that hold the image data; the Surface itself acts as the middleman. A producer puts image data into the Surface's buffers, and a consumer takes the data back out. For example, the camera, as producer, pushes preview frames into a SurfaceTexture, and our app, as consumer, takes the buffers from the SurfaceTexture and renders them to the screen with OpenGL ES.

    (figure: surface.jpeg)
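    The mSurface and mTextureSurface used in the code below are never shown in this post; they come from the demo's SurfaceView and TextureView. A minimal sketch of how they might be obtained (the view IDs, field names and the 1280x720 buffer size are assumptions, not taken from the demo):

    // Sketch (assumed): wiring up the two preview Surfaces used below.
    SurfaceView surfaceView = findViewById(R.id.surface_view); // assumed view id
    surfaceView.getHolder().addCallback(new SurfaceHolder.Callback() {
        @Override
        public void surfaceCreated(SurfaceHolder holder) {
            mSurface = holder.getSurface(); // Surface backed by the SurfaceView
        }

        @Override
        public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) { }

        @Override
        public void surfaceDestroyed(SurfaceHolder holder) { }
    });

    TextureView textureView = findViewById(R.id.texture_view); // assumed view id
    textureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
        @Override
        public void onSurfaceTextureAvailable(SurfaceTexture surfaceTexture, int width, int height) {
            surfaceTexture.setDefaultBufferSize(1280, 720); // match the preview size (assumed)
            mTextureSurface = new Surface(surfaceTexture);  // Surface backed by the TextureView
        }

        @Override
        public void onSurfaceTextureSizeChanged(SurfaceTexture surfaceTexture, int width, int height) { }

        @Override
        public boolean onSurfaceTextureDestroyed(SurfaceTexture surfaceTexture) {
            return true;
        }

        @Override
        public void onSurfaceTextureUpdated(SurfaceTexture surfaceTexture) { }
    });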
    CameraManager cameraManager = (CameraManager) getSystemService(Context.CAMERA_SERVICE);
    String[] cameraIdList = null;
    try {
        cameraIdList = cameraManager.getCameraIdList();
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
    if (cameraIdList == null || cameraIdList.length == 0) {
        Log.d("camera", "no camera found");
        return;
    }
    
    String mCameraId = null;
    for (String cameraId : cameraIdList) {
        try {
            CameraCharacteristics characteristics = cameraManager.getCameraCharacteristics(cameraId);
            Integer facing = characteristics.get(CameraCharacteristics.LENS_FACING);
            // check whether this is the front-facing camera
            if (facing != null && facing == CameraCharacteristics.LENS_FACING_FRONT) {
                mCameraId = cameraId;
                break;
            }
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }
    if (mCameraId == null) {
        Log.d("camera", "no front-facing camera");
        return;
    }
    
    try {
        cameraManager.openCamera(mCameraId, new CameraDevice.StateCallback() {
            @Override
            public void onOpened(@NonNull CameraDevice cameraDevice) {
                mCameraDevice = cameraDevice;
                try {
                    // add the two preview surfaces
                    List<Surface> surfaceList = new ArrayList<>();
                    surfaceList.add(mSurface);
                    surfaceList.add(mTextureSurface);
                    cameraDevice.createCaptureSession(surfaceList, new CameraCaptureSession.StateCallback() {
                        @Override
                        public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
                            try {
                                CaptureRequest.Builder builder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW); // preview template
                                builder.addTarget(mSurface);
                                builder.addTarget(mTextureSurface);
                                CaptureRequest captureRequest = builder.build();
    
                                // keep sending preview requests repeatedly
                                cameraCaptureSession.setRepeatingRequest(captureRequest, new CameraCaptureSession.CaptureCallback() {
                                    @Override
                                    public void onCaptureCompleted(@NonNull CameraCaptureSession session, @NonNull CaptureRequest request, @NonNull TotalCaptureResult result) {
                                    }
    
                                    @Override
                                    public void onCaptureFailed(@NonNull CameraCaptureSession session, @NonNull CaptureRequest request, @NonNull CaptureFailure failure) {
    
                                    }
                                }, mCameraHandler);
    
                            } catch (CameraAccessException e) {
                                e.printStackTrace();
                            }
                        }
    
                        @Override
                        public void onConfigureFailed(@NonNull CameraCaptureSession cameraCaptureSession) {
    
                        }
                    }, mCameraHandler);
    
                } catch (CameraAccessException e) {
                    e.printStackTrace();
                }
            }
    
            @Override
            public void onDisconnected(@NonNull CameraDevice cameraDevice) {
                cameraDevice.close();
            }
    
            @Override
            public void onError(@NonNull CameraDevice cameraDevice, int i) {
                cameraDevice.close();
            }
        }, mCameraHandler);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
    

    As you can see, it only takes a modest amount of code to get a preview running. To keep the steps easy to follow, all the callbacks are anonymous inner classes passed directly as arguments.
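    Two things the snippet above leaves out: mCameraHandler and the runtime permission. mCameraHandler is presumably a Handler bound to a background thread so the camera callbacks do not run on the UI thread, and openCamera() requires the CAMERA permission. A minimal sketch, assuming the same Activity (the thread name and request code are arbitrary):

    // assumed sketch: background handler for the camera callbacks
    HandlerThread cameraThread = new HandlerThread("camera-thread");
    cameraThread.start();
    mCameraHandler = new Handler(cameraThread.getLooper());

    // assumed sketch: make sure the CAMERA runtime permission is granted before openCamera()
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
            != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.CAMERA}, 1);
        return;
    }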

    DEMO: https://github.com/lesliebeijing/Camera2Demo
    Reference: this series is written quite clearly: https://www.jianshu.com/p/b9d994f2b381

    So what if you want the raw camera stream, e.g. YUV data? Simple: add an ImageReader as one of the output Surfaces. If you are not familiar with ImageReader, read the API docs a few more times.

    Saving a JPEG with ImageReader is straightforward:

    final ImageReader imageReader = ImageReader.newInstance(preViewSize.getWidth(), preViewSize.getHeight(), ImageFormat.JPEG, 2);
    imageReader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
        @Override
        public void onImageAvailable(ImageReader reader) {
            Image image = reader.acquireLatestImage();
            if (image == null) {
                return;
            }
            Image.Plane[] planes = image.getPlanes();
            for (int i = 0; i < planes.length; i++) {
                Image.Plane plane = planes[i];
                Log.d("leslie", "plan " + i);
                ByteBuffer buffer = plane.getBuffer();
                int pixelStride = plane.getPixelStride();
                int rowStride = plane.getRowStride();
                Log.d("leslie", "buffer size:" + buffer.remaining() + " rowStride:" + rowStride + " pixelStride:" + pixelStride);
                // JPEG is a compressed format, so there is only one plane; write the buffer straight to a file (rowStride=0, pixelStride=0)
    
                // save one image to external storage; note the write permission
                // the orientation is wrong and needs rotation: the image coming out of the camera is landscape
                if (!hasSave) {
                    hasSave = true;
                    File dir = getExternalFilesDir(null);
                    File imageFile = new File(dir, "image.jpeg");
                    try {
                        OutputStream outputStream = new FileOutputStream(imageFile);
                        byte[] data = new byte[buffer.remaining()];
                        buffer.get(data);
                        outputStream.write(data);
                        outputStream.close();
                        Log.d("leslie", "save image at path " + imageFile.getAbsolutePath());
                    } catch (FileNotFoundException e) {
                        e.printStackTrace();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            }
            image.close();  
        }
    }, mCameraHandler);
    surfaceList.add(imageReader.getSurface());
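    About the orientation note in the comments above: the JPEG is written exactly as the sensor produced it, so it usually comes out rotated. One possible fix, sketched here with a hard-coded 90-degree rotation applied to the decoded bytes (data is the byte array from the snippet above; in real code the angle should come from CameraCharacteristics.SENSOR_ORIENTATION, and a front camera may also need mirroring):

    // assumed sketch: rotate the decoded JPEG instead of saving the raw bytes as-is
    Bitmap source = BitmapFactory.decodeByteArray(data, 0, data.length);
    Matrix matrix = new Matrix();
    matrix.postRotate(90); // assumed sensor orientation
    Bitmap rotated = Bitmap.createBitmap(source, 0, 0,
            source.getWidth(), source.getHeight(), matrix, true);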
    

    Converting to YUV is also simple:

    final ImageReader imageReader = ImageReader.newInstance(preViewSize.getWidth(), preViewSize.getHeight(), ImageFormat.YUV_420_888, 2);
    imageReader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
        @Override
        public void onImageAvailable(ImageReader reader) {
            Image image = reader.acquireLatestImage();
            if (image == null) {
                return;
            }
    
            int width = image.getWidth();
            int height = image.getHeight();
    
            Image.Plane[] planes = image.getPlanes();
            Image.Plane yPlane = planes[0];
            ByteBuffer yBuffer = yPlane.getBuffer();
            Image.Plane uPlane = planes[1];
            ByteBuffer uBuffer = uPlane.getBuffer();
            Image.Plane vPlane = planes[2];
            ByteBuffer vBuffer = vPlane.getBuffer();
    
            byte[] yData = new byte[width * height];
            yBuffer.get(yData);
            // size the chroma arrays from the buffers themselves: with pixelStride == 1 each holds
            // width * height / 4 bytes, with pixelStride == 2 it holds width * height / 2 - 1 bytes
            byte[] uData = new byte[uBuffer.remaining()];
            uBuffer.get(uData);
            byte[] vData = new byte[vBuffer.remaining()];
            vBuffer.get(vData);
    
            if (uPlane.getPixelStride() == 1) {
                // pixel stride 1 means the planes are packed (YYYY... UU... VV...), so I420 is just three copies
                byte[] i420Data = new byte[width * height * 3 / 2];
                System.arraycopy(yData, 0, i420Data, 0, yData.length); // copy Y as-is
                System.arraycopy(uData, 0, i420Data, yData.length, uData.length); // copy U
                System.arraycopy(vData, 0, i420Data, yData.length + uData.length, vData.length); // copy V
            } else if (uPlane.getPixelStride() == 2) {
                // I/GRALLOC: LockFlexLayout: baseFormat: 11, yStride: 1280, ySize: 921600, uOffset: 921600,  uStride: 1280
                // ********* plan 0
                // buffer size:921600 rowStride:1280 pixelStride:1
                // ********* plan 1
                // buffer size:460799 rowStride:1280 pixelStride:2
                // ********* plan 2
                // buffer size:460799 rowStride:1280 pixelStride:2
    
                // with pixelStride == 2 the chroma samples are interleaved: valid U values sit at
                // indices 0, 2, 4, ... of the U buffer, and likewise for V.
                // Here we build NV21: YYYY... VUVU...
                byte[] nv21Data = new byte[width * height * 3 / 2];
                System.arraycopy(yData, 0, nv21Data, 0, yData.length); // copy Y as-is
                for (int i = 0; i < uData.length; i = i + 2) {
                    nv21Data[yData.length + i] = vData[i]; // V first, from even indices 0, 2, 4, ...
                    nv21Data[yData.length + i + 1] = uData[i]; // then U, from the same even indices
                }
    
                try {
                    // convert to a Bitmap via NV21 -> JPEG -> Bitmap
                    YuvImage yuvImage = new YuvImage(nv21Data, ImageFormat.NV21, width, height, null);
                    ByteArrayOutputStream stream = new ByteArrayOutputStream();
                    yuvImage.compressToJpeg(new Rect(0, 0, width, height), 80, stream);
                    Bitmap bitmap = BitmapFactory.decodeByteArray(stream.toByteArray(), 0, stream.size());
                    stream.close();
                    if (bitmap != null) {
                        bitmapSurfaceView.drawBitmap(bitmap);
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
            image.close();
        }
    }, mCameraHandler);
    surfaceList.add(imageReader.getSurface());
    

    My preview size is 1280*720 = 921600 pixels, so the Y plane is 921600 bytes.
    The U and V buffers are 460799 bytes each: 460799 = 1280 * 360 - 1 = 460800 - 1. Because the pixelStride of U and V is 2, only every other index holds a sample; the values sit at even indices 0, 2, 4, and so on. The last index of the final row is odd and carries no sample, so that byte is simply not stored, which is why the buffer is one byte short.

    If each row is padded with extra bytes at the end (rowStride larger than the valid row width), the calculation gets a bit more involved.
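    For reference, a minimal sketch of a row-by-row copy of the Y plane when rowStride is larger than width (the U and V planes work the same way, additionally skipping samples by pixelStride):

    // assumed sketch: strip row padding from the Y plane when rowStride > width
    byte[] yData = new byte[width * height];
    ByteBuffer yBuffer = yPlane.getBuffer();
    int yRowStride = yPlane.getRowStride();
    if (yRowStride == width) {
        yBuffer.get(yData); // no padding, one bulk copy
    } else {
        byte[] row = new byte[yRowStride];
        for (int r = 0; r < height; r++) {
            int length = Math.min(yRowStride, yBuffer.remaining()); // last row may be shorter
            yBuffer.get(row, 0, length);
            System.arraycopy(row, 0, yData, r * width, width); // keep only the valid bytes of each row
        }
    }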

    References: https://www.jianshu.com/p/c88f3b1c736b
    https://www.polarxiong.com/archives/Android-Image%E7%B1%BB%E6%B5%85%E6%9E%90-%E7%BB%93%E5%90%88YUV_420_888.html

    DEMO: https://github.com/lesliebeijing/Camera2Demo (see MainActivity2.java in the repo)
