Camera2 + GLSurfaceView + shader camera preview

Author: 码农小白two | Published 2019-12-25 16:57

    1. An overview of Camera2:
    ①、CameraManager: the entry point of the whole framework, used to obtain the other classes;
    ②、CameraCharacteristics: obtained from CameraManager; exposes the camera's parameters and capabilities;
    ③、CameraDevice: obtained from CameraManager; the counterpart of the old Camera class, used for preview operations, e.g. setting the Surface that displays the preview;
    ④、CaptureRequest.Builder: obtained from CameraDevice; configures the preview request;
    ⑤、CameraCaptureSession: obtained from CameraDevice; runs the preview using the request built by CaptureRequest.Builder.
    Usage flow:
    CameraManager -> CameraDevice -> CaptureRequest.Builder -> CameraCaptureSession
    (Original reference: https://blog.csdn.net/ccw0054/article/details/80339208)
    Below is an analysis of the code.
    Official samples: https://github.com/android/camera-samples
    The following is the demo code:

    private void openCamera(int width, int height) {
            if (ContextCompat.checkSelfPermission(getActivity(), Manifest.permission.CAMERA)
                    != PackageManager.PERMISSION_GRANTED) {
                requestCameraPermission();
                return;
            }
            setUpCameraOutputs(width, height);
            Log.d(TAG, "camera outputs configured");
            Activity activity = getActivity();
            CameraManager manager = (CameraManager) activity.getSystemService(Context.CAMERA_SERVICE);
            try {
                if (!mCameraOpenCloseLock.tryAcquire(2500, TimeUnit.MILLISECONDS)) {
                    throw new RuntimeException("Time out waiting to lock camera opening.");
                }
    
                Log.d(TAG, "opening camera id: " + mCameraId);
                // Use the camera id selected in setUpCameraOutputs rather than
                // a hardcoded "0", which would ignore the selection above.
                manager.openCamera(mCameraId, mStateCallback, mBackgroundHandler);
            } catch (CameraAccessException e) {
                e.printStackTrace();
            } catch (InterruptedException e) {
                throw new RuntimeException("Interrupted while trying to lock camera opening.", e);
            }
        }
    private void setUpCameraOutputs(int width, int height) {
            Activity activity = getActivity();
            CameraManager manager = (CameraManager) activity.getSystemService(Context.CAMERA_SERVICE);
            try {
                for (String cameraId : manager.getCameraIdList()) {
                    Log.d(TAG, "checking camera id: " + cameraId);
                    CameraCharacteristics characteristics
                            = manager.getCameraCharacteristics(cameraId);
    
                    // We don't use a front facing camera in this sample.
                    Integer facing = characteristics.get(CameraCharacteristics.LENS_FACING);
                    if (facing != null && facing == CameraCharacteristics.LENS_FACING_FRONT) {
                        continue;
                    }
    
                    StreamConfigurationMap map = characteristics.get(
                            CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
                    if (map == null) {
                        continue;
                    }
    
                    // For still image captures, we use the largest available size.
                    Size largest = Collections.max(
                            Arrays.asList(map.getOutputSizes(ImageFormat.JPEG)),
                            new CompareSizesByArea());
                    // Find out if we need to swap dimension to get the preview size relative to sensor
                    // coordinate.
                    int displayRotation = activity.getWindowManager().getDefaultDisplay().getRotation();
                    //noinspection ConstantConditions
                    mSensorOrientation = characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION);
                    boolean swappedDimensions = false;
                    Log.d(TAG, "candidate camera id: " + cameraId);
                    switch (displayRotation) {
                        case Surface.ROTATION_0:
                        case Surface.ROTATION_180:
                            if (mSensorOrientation == 90 || mSensorOrientation == 270) {
                                swappedDimensions = true;
                            }
                            break;
                        case Surface.ROTATION_90:
                        case Surface.ROTATION_270:
                            if (mSensorOrientation == 0 || mSensorOrientation == 180) {
                                swappedDimensions = true;
                            }
                            break;
                        default:
                            Log.e(TAG, "Display rotation is invalid: " + displayRotation);
                    }
                    Point displaySize = new Point();
                    activity.getWindowManager().getDefaultDisplay().getSize(displaySize);
                    int rotatedPreviewWidth = width;
                    int rotatedPreviewHeight = height;
                    int maxPreviewWidth = displaySize.x;
                    int maxPreviewHeight = displaySize.y;
    
                    if (swappedDimensions) {
                        rotatedPreviewWidth = height;
                        rotatedPreviewHeight = width;
                        maxPreviewWidth = displaySize.y;
                        maxPreviewHeight = displaySize.x;
                    }
    
                    if (maxPreviewWidth > MAX_PREVIEW_WIDTH) {
                        maxPreviewWidth = MAX_PREVIEW_WIDTH;
                    }
    
                    if (maxPreviewHeight > MAX_PREVIEW_HEIGHT) {
                        maxPreviewHeight = MAX_PREVIEW_HEIGHT;
                    }
                    // Danger, W.R.! Attempting to use too large a preview size could exceed the camera
                    // bus' bandwidth limitation, resulting in gorgeous previews but the storage of
                    // garbage capture data.
                    mPreviewSize = chooseOptimalSize(map.getOutputSizes(SurfaceTexture.class),
                            rotatedPreviewWidth, rotatedPreviewHeight, maxPreviewWidth,
                            maxPreviewHeight, largest);
                    // We fit the aspect ratio of TextureView to the size of preview we picked.
                    int orientation = getResources().getConfiguration().orientation;
                    if (orientation == Configuration.ORIENTATION_LANDSCAPE) {
                        autoFitTextureView.setAspectRatio(
                                mPreviewSize.getWidth(), mPreviewSize.getHeight());
                    } else {
                        autoFitTextureView.setAspectRatio(
                                mPreviewSize.getHeight(), mPreviewSize.getWidth());
                    }
                    // Check if the flash is supported.
                    Boolean available = characteristics.get(CameraCharacteristics.FLASH_INFO_AVAILABLE);
                    mFlashSupported = available == null ? false : available;
                    mImageReader = ImageReader.newInstance(largest.getWidth(), largest.getHeight(),
                            ImageFormat.YUV_420_888, /*maxImages*/5);
    
                    mCameraId = cameraId;
                    return;
                }
            } catch (CameraAccessException e) {
                e.printStackTrace();
            } catch (NullPointerException e) {
                // Currently an NPE is thrown when the Camera2API is used but not supported on the
                // device this code runs.
                ErrorDialog.newInstance(getString(R.string.camera_error))
                        .show(getChildFragmentManager(), FRAGMENT_DIALOG);
            }
        }
    private final CameraDevice.StateCallback mStateCallback = new CameraDevice.StateCallback() {
    
            @Override
            public void onOpened(@NonNull CameraDevice cameraDevice) {
                // This method is called when the camera is opened. We start camera preview here.
                mCameraOpenCloseLock.release();
                mCameraDevice = cameraDevice;
                //createCameraPreviewSession();
            }
    
            @Override
            public void onDisconnected(@NonNull CameraDevice cameraDevice) {
                mCameraOpenCloseLock.release();
                cameraDevice.close();
                mCameraDevice = null;
            }
    
            @Override
            public void onError(@NonNull CameraDevice cameraDevice, int error) {
                mCameraOpenCloseLock.release();
                cameraDevice.close();
                mCameraDevice = null;
                Activity activity = getActivity();
                if (null != activity) {
                    activity.finish();
                }
            }
    
        };
    private void createCameraPreviewSession() {
            try {
                // Configure the SurfaceTexture buffer to the preview size chosen
                // in setUpCameraOutputs, not the MAX_PREVIEW_* constants.
                mSurfaceTexture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
                Surface surface = new Surface(mSurfaceTexture);
                // This is the output Surface we need to start preview.
                // We set up a CaptureRequest.Builder with the output Surface.
                mPreviewRequestBuilder
                        = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
                mPreviewRequestBuilder.addTarget(surface);
                //mPreviewRequestBuilder.addTarget(mImageReader.getSurface());
    
                // Here, we create a CameraCaptureSession for camera preview.
                mCameraDevice.createCaptureSession(Arrays.asList(surface, mImageReader.getSurface()),
                        new CameraCaptureSession.StateCallback() {
    
                            @Override
                            public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
                                // The camera is already closed
                                if (null == mCameraDevice) {
                                    return;
                                }
                                // When the session is ready, we start displaying the preview.
                                mCaptureSession = cameraCaptureSession;
                                // Auto focus should be continuous for camera preview.
                                mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE,
                                        CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
                                // Flash is automatically enabled when necessary.
                                setAutoFlash(mPreviewRequestBuilder);
                                // Finally, we start displaying the camera preview.
                                mPreviewRequest = mPreviewRequestBuilder.build();
                                try {
                                    mCaptureSession.setRepeatingRequest(mPreviewRequest,
                                            null, mBackgroundHandler);
                                } catch (CameraAccessException e) {
                                    e.printStackTrace();
                                }
                            }
    
                            @Override
                            public void onConfigureFailed(
                                    @NonNull CameraCaptureSession cameraCaptureSession) {
                                showToast("Failed");
                            }
                        }, mBackgroundHandler
                );
            } catch (CameraAccessException e) {
                e.printStackTrace();
            }
        }
    

    From the code we can see that openCamera first sets up an output size so that the camera data is not stretched and distorted. It then obtains the camera service with CameraManager manager = (CameraManager) activity.getSystemService(Context.CAMERA_SERVICE); and opens the device with manager.openCamera(...). The mStateCallback passed in receives the device's state callbacks (opened, disconnected, error), a feature specific to Camera2 that tells us the state of the camera. Once the camera is open we call createCameraPreviewSession; in that function the SurfaceTexture we created is bound to a texture ID and wired to mPreviewRequestBuilder, so the preview data can be displayed correctly.
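The demo above relies on chooseOptimalSize (called in setUpCameraOutputs but not shown) to pick a preview size matching the target aspect ratio, which is what prevents the stretched preview just mentioned. Below is a simplified, non-Android sketch of that selection logic; it uses `int[]{width, height}` pairs instead of android.util.Size so it runs off-device, and the class name and fallback behavior are assumptions, not the demo's exact code:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class PreviewSizeChooser {
    // Pick the smallest candidate that matches the target aspect ratio,
    // fits inside the max bounds, and is at least as big as the requested
    // texture size; otherwise fall back to the largest matching candidate.
    static int[] chooseOptimalSize(int[][] choices, int textureW, int textureH,
                                   int maxW, int maxH, int[] aspectRatio) {
        List<int[]> bigEnough = new ArrayList<>();
        List<int[]> notBigEnough = new ArrayList<>();
        int arW = aspectRatio[0], arH = aspectRatio[1];
        for (int[] option : choices) {
            int w = option[0], h = option[1];
            if (w <= maxW && h <= maxH && h == w * arH / arW) {
                if (w >= textureW && h >= textureH) {
                    bigEnough.add(option);
                } else {
                    notBigEnough.add(option);
                }
            }
        }
        Comparator<int[]> byArea =
                Comparator.comparingLong(s -> (long) s[0] * s[1]);
        if (!bigEnough.isEmpty()) {
            return bigEnough.stream().min(byArea).get();
        } else if (!notBigEnough.isEmpty()) {
            return notBigEnough.stream().max(byArea).get();
        }
        // No candidate matched the aspect ratio: fall back to the first choice.
        return choices[0];
    }
}
```

Given, say, a 16:9 target on a sensor offering 640x480, 1280x720 and 1920x1080, this picks 1280x720 for an 800x450 texture: big enough, right ratio, smallest area.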
    2. An overview of GLSurfaceView:
    GLSurfaceView extends SurfaceView by adding EGL management (my understanding: it provides a very convenient interface for using OpenGL ES). The first thing that comes to mind with GLSurfaceView is the Renderer, which usually contains code like this:

     private GLSurfaceView.Renderer renderer = new GLSurfaceView.Renderer() {
            @Override
            public void onSurfaceCreated(GL10 gl10, EGLConfig eglConfig) {
                // Called once the EGL context is ready: create textures,
                // compile shaders, and set up vertex/texture coordinates here.
            }

            @Override
            public void onSurfaceChanged(GL10 gl10, int width, int height) {
                // Called when the surface size changes: update the viewport here.
            }

            @Override
            public void onDrawFrame(GL10 gl10) {
                // Called for every frame: update the SurfaceTexture and draw.
            }
        };
    

    The Renderer is split into three parts: onSurfaceCreated, onSurfaceChanged, and onDrawFrame. In onSurfaceCreated we usually set up texture coordinates and vertex coordinates. On the texture side, we use a texture ID to bind our SurfaceTexture, which is how the camera image gets loaded into our view; the vertex coordinates determine where the quad's corners land and how the texture maps onto them. Through the texture we obtain an image, and through the vertices we can draw that image however the requirements demand. For a preview, we first open the camera (the openCamera function from section 1), then create a texture object (ID), and after that load the texture and vertex coordinates. When loading them we can also change the shader code to operate on the pixels; these shaders live under the raw directory and include the three filters implemented in the demo.
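The coordinate setup described above usually amounts to a full-screen quad in normalized device coordinates plus matching texture coordinates, packed into direct FloatBuffers so OpenGL can read them. A minimal sketch (the class name is hypothetical, and the exact winding and V-axis flip depend on the shader and camera orientation):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class QuadGeometry {
    // Full-screen quad in normalized device coordinates (triangle-strip order).
    static final float[] VERTEX_COORDS = {
            -1f, -1f,   // bottom-left
             1f, -1f,   // bottom-right
            -1f,  1f,   // top-left
             1f,  1f,   // top-right
    };
    // Texture coordinates mapping the camera frame onto the quad;
    // the V axis is flipped here to match the camera image orientation.
    static final float[] TEXTURE_COORDS = {
            0f, 1f,
            1f, 1f,
            0f, 0f,
            1f, 0f,
    };

    // OpenGL needs direct, native-order buffers for glVertexAttribPointer.
    static FloatBuffer asFloatBuffer(float[] data) {
        FloatBuffer buf = ByteBuffer.allocateDirect(data.length * 4)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        buf.put(data).position(0);
        return buf;
    }
}
```

These buffers are what onSurfaceCreated would hand to glVertexAttribPointer for the position and texture-coordinate attributes.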
    Binary threshold:

    binary.png
    Edge detection:
    edge.png
    9-way split screen:
    shared9.png
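A binary filter like the first one typically converts each pixel to luminance in the fragment shader and thresholds it. Since the demo's shader source is not shown here, this is a plain-Java sketch of the same per-pixel math; the 0.299/0.587/0.114 weights are the standard luma coefficients, and the 0.5 threshold is an assumption:

```java
public class BinaryFilter {
    // Mirrors a fragment shader along the lines of:
    //   float y = dot(color.rgb, vec3(0.299, 0.587, 0.114));
    //   gl_FragColor = y > 0.5 ? vec4(1.0) : vec4(0.0, 0.0, 0.0, 1.0);
    static int binarize(int argb, float threshold) {
        float r = ((argb >> 16) & 0xFF) / 255f;
        float g = ((argb >> 8) & 0xFF) / 255f;
        float b = (argb & 0xFF) / 255f;
        float luma = 0.299f * r + 0.587f * g + 0.114f * b;
        // Pixels brighter than the threshold become white, the rest black.
        return luma > threshold ? 0xFFFFFFFF : 0xFF000000;
    }
}
```

The edge and 9-split filters follow the same pattern, only with a Sobel-style neighborhood sample and a texture-coordinate wrap (fract(uv * 3.0)) respectively.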
    GitHub code: https://github.com/Frank1481906280/GlCV4Android

    Source: https://www.haomeiwen.com/subject/qogcoctx.html