The source analyzed here is the Android 8.0 version of the Camera2 app (Android 8.0 Camera2). Since this is not an Android Studio project, it cannot be imported into AS directly (a GitHub mirror is available). The project contains a great many packages and classes, so the entry Activity is not obvious at a glance; let's start by analyzing the manifest to find it.
<activity
android:name="com.android.camera.CameraActivity"
android:clearTaskOnLaunch="true"
android:configChanges="orientation|screenSize|keyboardHidden"
android:label="@string/app_name"
android:launchMode="singleTask"
android:taskAffinity="com.android.camera.CameraActivity"
android:theme="@style/Theme.Camera"
android:windowSoftInputMode="stateAlwaysHidden|adjustPan">
<intent-filter>
<action android:name="android.media.action.STILL_IMAGE_CAMERA" />
<category android:name="android.intent.category.DEFAULT" />
</intent-filter>
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.DEFAULT" />
</intent-filter>
<meta-data
android:name="com.android.keyguard.layout"
android:resource="@layout/keyguard_widget" />
</activity>
<activity-alias
android:name="com.android.camera.CameraLauncher"
android:label="@string/app_name"
android:targetActivity="com.android.camera.CameraActivity">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.DEFAULT" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity-alias>
<activity
android:name="com.android.camera.PermissionsActivity"
android:excludeFromRecents="true"
android:label="@string/app_name"
android:parentActivityName="com.android.camera.CameraActivity">
<meta-data
android:name="android.support.PARENT_ACTIVITY"
android:value="com.android.camera.CameraActivity" />
</activity>
<activity
android:name="com.android.camera.CaptureActivity"
android:configChanges="orientation|screenSize|keyboardHidden"
android:label="@string/app_name"
android:theme="@style/Theme.Camera"
android:windowSoftInputMode="stateAlwaysHidden|adjustPan">
<intent-filter>
<action android:name="android.media.action.IMAGE_CAPTURE" />
<category android:name="android.intent.category.DEFAULT" />
</intent-filter>
</activity>
<!-- Video camera and capture use the Camcorder label and icon. -->
<activity-alias
android:name="com.android.camera.VideoCamera"
android:label="@string/video_camera_label"
android:targetActivity="com.android.camera.CaptureActivity">
<intent-filter>
<action android:name="android.media.action.VIDEO_CAMERA" />
<category android:name="android.intent.category.DEFAULT" />
</intent-filter>
<intent-filter>
<action android:name="android.media.action.VIDEO_CAPTURE" />
<category android:name="android.intent.category.DEFAULT" />
</intent-filter>
</activity-alias>
<activity
android:name="com.android.camera.SecureCameraActivity"
android:clearTaskOnLaunch="true"
android:configChanges="orientation|screenSize|keyboardHidden"
android:excludeFromRecents="true"
android:label="@string/app_name"
android:taskAffinity="com.android.camera.SecureCameraActivity"
android:theme="@style/Theme.SecureCamera"
android:windowSoftInputMode="stateAlwaysHidden|adjustPan">
<intent-filter>
<action android:name="android.media.action.STILL_IMAGE_CAMERA_SECURE" />
<category android:name="android.intent.category.DEFAULT" />
</intent-filter>
<intent-filter>
<action android:name="android.media.action.IMAGE_CAPTURE_SECURE" />
<category android:name="android.intent.category.DEFAULT" />
</intent-filter>
<meta-data
android:name="com.android.keyguard.layout"
android:resource="@layout/keyguard_widget" />
</activity>
<activity
android:name="com.android.camera.settings.CameraSettingsActivity"
android:configChanges="keyboardHidden|orientation|screenSize"
android:label="@string/mode_settings"
android:theme="@style/Theme.CameraSettings">
</activity>
From the above we can see that the app's launch Activity is CameraActivity; PermissionsActivity handles runtime permission requests, and CameraSettingsActivity provides the settings screen. The remaining entries are either aliases of CameraActivity (activity-alias) or Activities that simply extend CameraActivity without adding anything. In other words, the bulk of the app's work happens in CameraActivity, so that is what we will focus on.
public class CameraActivity extends QuickActivity
implements AppController, CameraAgent.CameraOpenCallback,
ShareActionProvider.OnShareTargetSelectedListener {
}
As you can see, CameraActivity extends and implements quite a few things. Let's start with its parent class, QuickActivity, which carries the following comment:
The KeyguardManager service can be queried to determine which state we are in.
If started from the lock screen, the activity may be quickly started,
resumed, paused, stopped, and then started and resumed again. This is
problematic for launch time from the lock screen because we typically open the
camera in onResume() and close it in onPause(). These camera operations take
a long time to complete. To workaround it, this class filters out
high-frequency onResume()->onPause() sequences if the KeyguardManager
indicates that we have started from the lock screen.
Roughly, this class works around the problem that launching the camera from the lock screen can run the Activity through onResume()/onPause() twice in quick succession. It is an abstract class that defines the following methods for subclasses to override; as the names suggest, they map one-to-one onto the Activity lifecycle callbacks. (A minimal sketch of the filtering idea follows the method list below.)
/**
* Subclasses should override this in place of {@link Activity#onNewIntent}.
*/
protected void onNewIntentTasks(Intent newIntent) {
}
/**
* Subclasses should override this in place of {@link Activity#onCreate}.
*/
protected void onCreateTasks(Bundle savedInstanceState) {
}
/**
* Subclasses should override this in place of {@link Activity#onStart}.
*/
protected void onStartTasks() {
}
/**
* Subclasses should override this in place of {@link Activity#onResume}.
*/
protected void onResumeTasks() {
}
/**
* Subclasses should override this in place of {@link Activity#onPause}.
*/
protected void onPauseTasks() {
}
/**
* Subclasses should override this in place of {@link Activity#onStop}.
*/
protected void onStopTasks() {
}
/**
* Subclasses should override this in place of {@link Activity#onDestroy}.
*/
protected void onDestroyTasks() {
}
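QuickActivity's actual implementation is not reproduced here, but as a rough illustration of the filtering described in the comment above, a minimal, hypothetical sketch (not the AOSP code; class name, constant and delay are invented) could defer the heavy onResume work briefly when started from the lock screen, so that a rapid onResume() -> onPause() pair cancels out:
import android.app.Activity;
import android.app.KeyguardManager;
import android.content.Context;
import android.os.Handler;
import android.os.Looper;

public abstract class LifecycleFilterActivity extends Activity {
    // Delay is illustrative; the real QuickActivity uses its own heuristics.
    private static final long ON_RESUME_DELAY_MILLIS = 20;
    private final Handler mMainHandler = new Handler(Looper.getMainLooper());
    private Runnable mPendingResumeTasks;

    @Override
    protected final void onResume() {
        super.onResume();
        KeyguardManager km = (KeyguardManager) getSystemService(Context.KEYGUARD_SERVICE);
        if (km != null && km.isKeyguardLocked()) {
            // Launched from the lock screen: defer the expensive work (opening the camera)
            // so that a quick onResume() -> onPause() sequence can cancel it.
            mPendingResumeTasks = new Runnable() {
                @Override
                public void run() {
                    mPendingResumeTasks = null;
                    onResumeTasks();
                }
            };
            mMainHandler.postDelayed(mPendingResumeTasks, ON_RESUME_DELAY_MILLIS);
        } else {
            onResumeTasks();
        }
    }

    @Override
    protected final void onPause() {
        if (mPendingResumeTasks != null) {
            // onPause() arrived before the deferred resume ran: drop both, nothing to undo.
            mMainHandler.removeCallbacks(mPendingResumeTasks);
            mPendingResumeTasks = null;
        } else {
            onPauseTasks();
        }
        super.onPause();
    }

    protected void onResumeTasks() {
    }

    protected void onPauseTasks() {
    }
}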
Next, let's start from CameraActivity's onCreateTasks(), which is effectively its onCreate() (code trimmed):
mOneCameraOpener = OneCameraModule.provideOneCameraOpener(
        mFeatureConfig,
        mAppContext,
        mActiveCameraDeviceTracker,
        ResolutionUtil.getDisplayMetrics(this));
mOneCameraManager = OneCameraModule.provideOneCameraManager();
mModuleManager = new ModuleManagerImpl();
ModulesInfo.setupModules(mAppContext, mModuleManager, mFeatureConfig);
This is just a series of object creations. Let's see what the last statement does:
/**ModulesInfo.java**/
public static void setupModules(Context context, ModuleManager moduleManager,
OneCameraFeatureConfig config) {
Resources res = context.getResources();
int photoModuleId = context.getResources().getInteger(R.integer.camera_mode_photo);
registerPhotoModule(moduleManager, photoModuleId, SettingsScopeNamespaces.PHOTO,
config.isUsingCaptureModule());
moduleManager.setDefaultModuleIndex(photoModuleId);
registerVideoModule(moduleManager, res.getInteger(R.integer.camera_mode_video),
SettingsScopeNamespaces.VIDEO);
if (PhotoSphereHelper.hasLightCycleCapture(context)) {
registerWideAngleModule(moduleManager, res.getInteger(R.integer.camera_mode_panorama),
SettingsScopeNamespaces.PANORAMA);
registerPhotoSphereModule(moduleManager,
res.getInteger(R.integer.camera_mode_photosphere),
SettingsScopeNamespaces.PANORAMA);
}
if (RefocusHelper.hasRefocusCapture(context)) {
registerRefocusModule(moduleManager, res.getInteger(R.integer.camera_mode_refocus),
SettingsScopeNamespaces.REFOCUS);
}
if (GcamHelper.hasGcamAsSeparateModule(config)) {
registerGcamModule(moduleManager, res.getInteger(R.integer.camera_mode_gcam),
SettingsScopeNamespaces.PHOTO,
config.getHdrPlusSupportLevel(OneCamera.Facing.BACK));
}
int imageCaptureIntentModuleId = res.getInteger(R.integer.camera_mode_capture_intent);
registerCaptureIntentModule(moduleManager, imageCaptureIntentModuleId,
SettingsScopeNamespaces.PHOTO, config.isUsingCaptureModule());
}
private static void registerPhotoModule(ModuleManager moduleManager, final int moduleId,
final String namespace, final boolean enableCaptureModule) {
moduleManager.registerModule(new ModuleManager.ModuleAgent() {
@Override
public int getModuleId() {
return moduleId;
}
@Override
public boolean requestAppForCamera() {
// The PhotoModule requests the old app camere, while the new
// capture module is using OneCamera. At some point we'll
// refactor all modules to use OneCamera, then the new module
// doesn't have to manage it itself.
return !enableCaptureModule;
}
@Override
public String getScopeNamespace() {
return namespace;
}
@Override
public ModuleController createModule(AppController app, Intent intent) {
Log.v(TAG, "EnableCaptureModule = " + enableCaptureModule);
return enableCaptureModule ? new CaptureModule(app) : new PhotoModule(app);
}
});
}
As you can see, this registers a series of Modules with the moduleManager; judging by the names, they correspond to different features: photo, video, wide-angle, photo sphere (panorama), refocus, and so on. Although it looks like Modules are being registered, the moduleManager never holds any Module object itself — it only holds the ModuleManager.ModuleAgent callbacks, and a Module is created on demand through its agent. Note also that right after registerPhotoModule(), the photo module is set as the default. A minimal sketch of this registry idea follows; after that we return to CameraActivity.
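A hypothetical, simplified sketch of the registry pattern (not the real ModuleManagerImpl; the Agent interface here stands in for ModuleManager.ModuleAgent) — only factories are stored, and a module is created lazily when a mode is actually selected:
import android.util.SparseArray;

public class SimpleModuleRegistry {
    /** Stand-in for ModuleManager.ModuleAgent: knows its id and how to build its module. */
    public interface Agent {
        int getModuleId();
        Object createModule();
    }

    private final SparseArray<Agent> mAgents = new SparseArray<>();
    private int mDefaultModuleId = -1;

    public void registerModule(Agent agent) {
        // Only the factory callback is kept; no module object exists yet.
        mAgents.put(agent.getModuleId(), agent);
    }

    public void setDefaultModuleIndex(int moduleId) {
        mDefaultModuleId = moduleId;
    }

    public Agent getModuleAgent(int moduleId) {
        Agent agent = mAgents.get(moduleId);
        // Fall back to the default (photo) module for unknown ids.
        return agent != null ? agent : mAgents.get(mDefaultModuleId);
    }
}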
/**CameraActivity.onCreateTasks()**/
mCameraAppUI = new CameraAppUI(this,
(MainActivityLayout) findViewById(R.id.activity_root_view), isCaptureIntent());
setModuleFromModeIndex(getModeIndex());
mCurrentModule.init(this, isSecureCamera(), isCaptureIntent());
mCameraAppUI.prepareModuleUI();
The first statement creates the UI; since we care about the control flow here, the view side is left aside for now.
Next, the getModeIndex() method:
/**
* Get the current mode index from the Intent or from persistent
* settings.
*/
private int getModeIndex() {
int modeIndex = -1;
int photoIndex = getResources().getInteger(R.integer.camera_mode_photo);
int videoIndex = getResources().getInteger(R.integer.camera_mode_video);
int gcamIndex = getResources().getInteger(R.integer.camera_mode_gcam);
int captureIntentIndex =
getResources().getInteger(R.integer.camera_mode_capture_intent);
String intentAction = getIntent().getAction();
if (MediaStore.INTENT_ACTION_VIDEO_CAMERA.equals(intentAction)
|| MediaStore.ACTION_VIDEO_CAPTURE.equals(intentAction)) {
modeIndex = videoIndex;
} else if (MediaStore.ACTION_IMAGE_CAPTURE.equals(intentAction)
|| MediaStore.ACTION_IMAGE_CAPTURE_SECURE.equals(intentAction)) {
// Capture intent.
modeIndex = captureIntentIndex;
} else if (MediaStore.INTENT_ACTION_STILL_IMAGE_CAMERA.equals(intentAction)
||MediaStore.INTENT_ACTION_STILL_IMAGE_CAMERA_SECURE.equals(intentAction)
|| MediaStore.ACTION_IMAGE_CAPTURE_SECURE.equals(intentAction)) {
modeIndex = mSettingsManager.getInteger(SettingsManager.SCOPE_GLOBAL,
Keys.KEY_CAMERA_MODULE_LAST_USED);
// For upgraders who have not seen the aspect ratio selection screen,
// we need to drop them back in the photo module and have them select
// aspect ratio.
// TODO: Move this to SettingsManager as an upgrade procedure.
if (!mSettingsManager.getBoolean(SettingsManager.SCOPE_GLOBAL,
Keys.KEY_USER_SELECTED_ASPECT_RATIO)) {
modeIndex = photoIndex;
}
} else {
// If the activity has not been started using an explicit intent,
// read the module index from the last time the user changed modes
modeIndex = mSettingsManager.getInteger(SettingsManager.SCOPE_GLOBAL,
Keys.KEY_STARTUP_MODULE_INDEX);
if ((modeIndex == gcamIndex &&
!GcamHelper.hasGcamAsSeparateModule(mFeatureConfig)) || modeIndex < 0) {
modeIndex = photoIndex;
}
}
return modeIndex;
}
So if CameraActivity was not started by an explicit intent, the module used last time is opened. Since this is a first launch, the default is photoIndex. (A quick example of how the capture-intent branch gets triggered from another app follows.)
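As an aside, the capture-intent branch above is what serves third-party apps. Triggering it uses the standard MediaStore action — for example (standard framework API, not part of this app; class and method names are illustrative):
import android.app.Activity;
import android.content.Intent;
import android.provider.MediaStore;

public class TakePictureHelper {
    static final int REQUEST_IMAGE_CAPTURE = 1;

    /** Launches the camera app's capture-intent module (camera_mode_capture_intent). */
    static void dispatchTakePictureIntent(Activity activity) {
        Intent takePictureIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
        if (takePictureIntent.resolveActivity(activity.getPackageManager()) != null) {
            activity.startActivityForResult(takePictureIntent, REQUEST_IMAGE_CAPTURE);
        }
    }
}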
Now look at setModuleFromModeIndex():
/**CameraActivity**/
private void setModuleFromModeIndex(int modeIndex) {
ModuleAgent agent = mModuleManager.getModuleAgent(modeIndex);
if (agent == null) {
return;
}
if (!agent.requestAppForCamera()) {
mCameraController.closeCamera(true);
}
mCurrentModeIndex = agent.getModuleId();
mCurrentModule = (CameraModule) agent.createModule(this, getIntent());
}
We have already registered a number of ModuleAgents with mModuleManager. With the default photoIndex from above as the argument, the agent we get back is:
new ModuleManager.ModuleAgent() {
@Override
public int getModuleId() {
return moduleId;
}
@Override
public boolean requestAppForCamera() {
// The PhotoModule requests the old app camere, while the new
// capture module is using OneCamera. At some point we'll
// refactor all modules to use OneCamera, then the new module
// doesn't have to manage it itself.
return !enableCaptureModule;
}
@Override
public String getScopeNamespace() {
return namespace;
}
@Override
public ModuleController createModule(AppController app, Intent intent) {
Log.v(TAG, "EnableCaptureModule = " + enableCaptureModule);
return enableCaptureModule ? new CaptureModule(app) : new PhotoModule(app);
}
}
Whether mCurrentModule = (CameraModule) agent.createModule(this, getIntent()) returns a CaptureModule or a PhotoModule depends on enableCaptureModule. Its actual argument is config.isUsingCaptureModule(), and config comes from mFeatureConfig in CameraActivity. Let's trace that parameter back:
/**CameraActivity.java**/
mFeatureConfig = OneCameraFeatureConfigCreator.createDefault(getContentResolver(),
getServices().getMemoryManager());
/**OneCameraFeatureConfigCreator.java**/
public static OneCameraFeatureConfig createDefault(ContentResolver contentResolver,
MemoryManager memoryManager) {
// Enable CaptureModule on all M devices.
boolean useCaptureModule = true;
Log.i(TAG, "CaptureModule? " + useCaptureModule);
// HDR+ has multiple levels of support.
HdrPlusSupportLevel hdrPlusSupportLevel =
GcamHelper.determineHdrPlusSupportLevel(contentResolver, useCaptureModule);
return new OneCameraFeatureConfig(useCaptureModule,
buildCaptureModuleDetector(contentResolver),
hdrPlusSupportLevel,
memoryManager.getMaxAllowedNativeMemoryAllocation(),
GservicesHelper.getMaxAllowedImageReaderCount(contentResolver));
}
/**OneCameraFeatureConfig**/
OneCameraFeatureConfig(boolean useCaptureModule,
Function<CameraCharacteristics, CaptureSupportLevel> captureModeDetector,
HdrPlusSupportLevel hdrPlusSupportLevel,
int maxMemoryMB,
int maxAllowedImageReaderCount) {
mUseCaptureModule = useCaptureModule;
mCaptureModeDetector = captureModeDetector;
mHdrPlusSupportLevel = hdrPlusSupportLevel;
mMaxMemoryMB = maxMemoryMB;
mMaxAllowedImageReaderCount = maxAllowedImageReaderCount;
}
public boolean isUsingCaptureModule() {
return mUseCaptureModule;
}
Despite the layers of wrapping, the default is clearly true, so we follow the default path: what is ultimately returned is a CaptureModule. Simplified, this is just mCurrentModule = new CaptureModule(CameraActivity.this); so let's look at CaptureModule's construction.
public CaptureModule(AppController appController) {
this(appController, false);
}
/** Constructs a new capture module. */
public CaptureModule(AppController appController, boolean stickyHdr) {
super(appController);
Profile guard = mProfiler.create("new CaptureModule").start();
mPaused = true;
mMainThread = MainThread.create();
mAppController = appController;
mContext = mAppController.getAndroidContext();
mSettingsManager = mAppController.getSettingsManager();
mStickyGcamCamera = stickyHdr;
mLocationManager = mAppController.getLocationManager();
mPreviewTransformCalculator = new PreviewTransformCalculator(
mAppController.getOrientationManager());
mBurstController = BurstFacadeFactory.create(mContext,
new OrientationLockController() {
@Override
public void unlockOrientation() {
mAppController.getOrientationManager().unlockOrientation();
}
@Override
public void lockOrientation() {
mAppController.getOrientationManager().lockOrientation();
}
},
new BurstReadyStateChangeListener() {
@Override
public void onBurstReadyStateChanged(boolean ready) {
// TODO: This needs to take into account the state of
// the whole system, not just burst.
onReadyStateChanged(false);
}
});
mMediaActionSound = new MediaActionSound();
guard.stop();
}
The most important line here is mAppController = appController: CaptureModule now holds an AppController reference pointing to CameraActivity.
At this point mCurrentModule in CameraActivity has been initialized: it is a CaptureModule, and that CaptureModule in turn holds a reference back to CameraActivity.
Back in CameraActivity, the next statement is mCurrentModule.init(this, isSecureCamera(), isCaptureIntent()); oddly, the last two parameters are never used inside the method. The implementation:
/**CaptureModule.init() (trimmed)**/
public void init(CameraActivity activity, boolean isSecureCamera, boolean isCaptureIntent) {
mCameraHandler = new Handler(thread.getLooper());
mOneCameraOpener = mAppController.getCameraOpener();
mOneCameraManager = OneCameraModule.provideOneCameraManager();
mUI = new CaptureModuleUI(activity, mAppController.getModuleLayoutRoot(), mUIListener);
mAppController.setPreviewStatusListener(mPreviewStatusListener);
}
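The thread variable used above belongs to the trimmed code; in the source it is a dedicated background HandlerThread, roughly like this (a sketch; the exact thread name is illustrative):
/**CaptureModule.init() — the trimmed part, approximately**/
HandlerThread thread = new HandlerThread("CaptureModule.CameraHandler"); // name illustrative
thread.start();
// Work posted to mCameraHandler now runs on this background thread,
// keeping slow camera calls off the UI thread.
mCameraHandler = new Handler(thread.getLooper());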
The statement mAppController.setPreviewStatusListener(mPreviewStatusListener) is the important one. mAppController is the reference to CameraActivity, so let's look at that method there:
/**CameraActivity**/
public void setPreviewStatusListener(PreviewStatusListener previewStatusListener) {
mCameraAppUI.setPreviewStatusListener(previewStatusListener);
}
In other words, this PreviewStatusListener, created inside CaptureModule, ultimately ends up set on mCameraAppUI:
/**CaptureModule**/
private final PreviewStatusListener mPreviewStatusListener = new PreviewStatusListener() {
@Override
public void onPreviewLayoutChanged(View v, int left, int top, int right,
int bottom, int oldLeft, int oldTop, int oldRight, int oldBottom) {
int width = right - left;
int height = bottom - top;
updatePreviewTransform(width, height, false);
}
@Override
public boolean shouldAutoAdjustTransformMatrixOnLayout() {
return USE_AUTOTRANSFORM_UI_LAYOUT;
}
@Override
public void onPreviewFlipped() {
// Do nothing because when preview is flipped, TextureView will lay
// itself out again, which will then trigger a transform matrix
// update.
}
@Override
public GestureDetector.OnGestureListener getGestureListener() {
return new GestureDetector.SimpleOnGestureListener() {
@Override
public boolean onSingleTapUp(MotionEvent ev) {
Point tapPoint = new Point((int) ev.getX(), (int) ev.getY());
Log.v(TAG, "onSingleTapUpPreview location=" + tapPoint);
if (!mCameraCharacteristics.isAutoExposureSupported() &&
!mCameraCharacteristics.isAutoFocusSupported()) {
return false;
}
startActiveFocusAt(tapPoint.x, tapPoint.y);
return true;
}
};
}
@Override
public View.OnTouchListener getTouchListener() {
return null;
}
@Override
public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
Log.d(TAG, "onSurfaceTextureAvailable");
// Force to re-apply transform matrix here as a workaround for
// b/11168275
updatePreviewTransform(width, height, true);
synchronized (mSurfaceTextureLock) {
mPreviewSurfaceTexture = surface;
}
reopenCamera();
}
@Override
public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
Log.d(TAG, "onSurfaceTextureDestroyed");
synchronized (mSurfaceTextureLock) {
mPreviewSurfaceTexture = null;
}
closeCamera();
return true;
}
@Override
public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) {
Log.d(TAG, "onSurfaceTextureSizeChanged");
updatePreviewBufferSize();
}
@Override
public void onSurfaceTextureUpdated(SurfaceTexture surface) {
if (mState == ModuleState.UPDATE_TRANSFORM_ON_NEXT_SURFACE_TEXTURE_UPDATE) {
Log.d(TAG, "onSurfaceTextureUpdated --> updatePreviewTransform");
mState = ModuleState.IDLE;
CameraAppUI appUI = mAppController.getCameraAppUI();
updatePreviewTransform(appUI.getSurfaceWidth(), appUI.getSurfaceHeight(), true);
}
}
};
Keep this callback in mind — it matters later. Back in CameraActivity, the next statement is mCameraAppUI.prepareModuleUI():
/**CameraAppUI.prepareModuleUI() (trimmed)**/
public void prepareModuleUI() {
mController.getSettingsManager().addListener(this);
mModuleUI = (FrameLayout) mCameraRootView.findViewById(R.id.module_layout);
mTextureView = (TextureView) mCameraRootView.findViewById(R.id.preview_content);
mTextureViewHelper = new TextureViewHelper(mTextureView, mCaptureLayoutHelper,
mController.getCameraProvider(), mController);
mTextureViewHelper.setSurfaceTextureListener(this);
mTextureViewHelper.setOnLayoutChangeListener(mPreviewLayoutChangeListener);
}
The TextureView here is our preview surface; it is wrapped into a TextureViewHelper, and a SurfaceTextureListener (CameraAppUI itself) is set on that helper:
/**CameraAppUI.surfaceTextureListener**/
@Override
public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
mSurface = surface;
mSurfaceWidth = width;
mSurfaceHeight = height;
Log.v(TAG, "SurfaceTexture is available");
if (mPreviewStatusListener != null) {
mPreviewStatusListener.onSurfaceTextureAvailable(surface, width, height);
}
enableModeOptions();
}
@Override
public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) {
mSurface = surface;
mSurfaceWidth = width;
mSurfaceHeight = height;
if (mPreviewStatusListener != null) {
mPreviewStatusListener.onSurfaceTextureSizeChanged(surface, width, height);
}
}
@Override
public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
mSurface = null;
Log.v(TAG, "SurfaceTexture is destroyed");
if (mPreviewStatusListener != null) {
return mPreviewStatusListener.onSurfaceTextureDestroyed(surface);
}
return false;
}
@Override
public void onSurfaceTextureUpdated(SurfaceTexture surface) {
mSurface = surface;
if (mPreviewStatusListener != null) {
mPreviewStatusListener.onSurfaceTextureUpdated(surface);
}
// Do not show the first preview frame. Due to the bug b/20724126, we need to have
// a WAR to request a preview frame followed by 5-frame ZSL burst before the repeating
// preview and ZSL streams. Need to hide the first preview frame since it is janky.
// We do this only for L, Nexus 6 and Haleakala.
if (mModeCoverState == COVER_WILL_HIDE_AFTER_NEXT_TEXTURE_UPDATE) {
mModeCoverState = COVER_WILL_HIDE_AT_NEXT_TEXTURE_UPDATE;
} else if (mModeCoverState == COVER_WILL_HIDE_AT_NEXT_TEXTURE_UPDATE){
Log.v(TAG, "hiding cover via onSurfaceTextureUpdated");
CameraPerformanceTracker.onEvent(CameraPerformanceTracker.FIRST_PREVIEW_FRAME);
hideModeCover();
}
}
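Who calls these methods in the first place? TextureViewHelper registers itself as the TextureView's SurfaceTextureListener, and the framework fires the callbacks once the view's SurfaceTexture exists. Outside this app, the same mechanism in plain framework terms looks like this (standard Android API; the view id is illustrative):
// Standard usage, e.g. inside an Activity's onCreate() after setContentView():
TextureView textureView = (TextureView) findViewById(R.id.preview); // id is illustrative
textureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
        // The SurfaceTexture now exists; it is safe to start the camera preview on it.
    }
    @Override
    public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) {
    }
    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
        return true; // true means the framework may release the SurfaceTexture
    }
    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture surface) {
    }
});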
Stepping into TextureViewHelper, we can see that once the TextureView is ready, onSurfaceTextureAvailable() is called back. Let's look at that method:
/**TextureViewHelper**/
@Override
public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
// Workaround for b/11168275, see b/10981460 for more details
if (mWidth != 0 && mHeight != 0) {
// Re-apply transform matrix for new surface texture
updateTransform();
}
if (mSurfaceTextureListener != null) {
mSurfaceTextureListener.onSurfaceTextureAvailable(surface, width, height);
}
}
So when the TextureView becomes available, mSurfaceTextureListener.onSurfaceTextureAvailable(surface, width, height) is invoked, and that mSurfaceTextureListener is the SurfaceTextureListener implemented in CameraAppUI above. In other words, when the TextureView becomes available, the following is called:
/**CameraAppUI.surfaceTextureListener**/
@Override
public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
mSurface = surface;
mSurfaceWidth = width;
mSurfaceHeight = height;
Log.v(TAG, "SurfaceTexture is available");
if (mPreviewStatusListener != null) {
mPreviewStatusListener.onSurfaceTextureAvailable(surface, width, height);
}
enableModeOptions();
}
And mPreviewStatusListener here is the implementation from CaptureModule, so next this gets called:
/**CaptureModule**/
@Override
public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
Log.d(TAG, "onSurfaceTextureAvailable");
// Force to re-apply transform matrix here as a workaround for
// b/11168275
updatePreviewTransform(width, height, true);
synchronized (mSurfaceTextureLock) {
mPreviewSurfaceTexture = surface;
}
reopenCamera();
}
So the picture is now clear: when the TextureView becomes available, the chain TextureView -> TextureViewHelper -> CameraAppUI -> PreviewStatusListener (CaptureModule) ends in CaptureModule.reopenCamera(). Following the calls down from there (through openCameraAndStartPreview()) we reach mOneCameraOpener.open(); let's look at that call:
/**CaptureModule.openCameraAndStartPreview() --> mOneCameraOpener.open() (trimmed)**/
mOneCameraOpener.open(cameraId, captureSetting, mCameraHandler, mainThread,
imageRotationCalculator, mBurstController, mSoundPlayer,
new OpenCallback() {
@Override
public void onFailure() {
}
@Override
public void onCameraClosed() {
}
@Override
public void onCameraOpened(@Nonnull final OneCamera camera) {
camera.startPreview(new Surface(getPreviewSurfaceTexture()),
new CaptureReadyCallback() {
@Override
public void onSetupFailed() {
}
@Override
public void onReadyForCapture() {
mCameraOpenCloseLock.release();
mMainThread.execute(new Runnable() {
@Override
public void run() {
Log.d(TAG, "Ready for capture.");
if (mCamera == null) {
Log.d(TAG, "Camera closed, aborting.");
return;
}
onPreviewStarted();
onReadyStateChanged(true);
mCamera.setReadyStateChangedListener(
CaptureModule.this);
mUI.initializeZoom(mCamera.getMaxZoom());
mCamera.setFocusStateListener(CaptureModule.this);
}
});
}
});
}
}, mAppController.getFatalErrorHandler());
Remember this OpenCallback — we will now see where it gets invoked. mOneCameraOpener comes from CameraActivity, so let's look at where it was created:
/** CameraActivity**/
mOneCameraOpener = OneCameraModule.provideOneCameraOpener(
mFeatureConfig,
mAppContext,
mActiveCameraDeviceTracker,
ResolutionUtil.getDisplayMetrics(this));
Tracing this down, it is easy to see that mOneCameraOpener is an instance of Camera2OneCameraOpenerImpl. Let's look at its open() method:
/**Camera2OneCameraOpenerImpl.open()**/
@Override
public void open(
final CameraId cameraKey,
final OneCameraCaptureSetting captureSetting,
final Handler handler,
final MainThread mainThread,
final ImageRotationCalculator imageRotationCalculator,
final BurstFacade burstController,
final SoundPlayer soundPlayer,
final OpenCallback openCallback,
final FatalErrorHandler fatalErrorHandler) {
mCameraManager.openCamera(cameraKey.getValue(), new CameraDevice.StateCallback() {
private boolean isFirstCallback = true;
@Override
public void onDisconnected(CameraDevice device) {
}
@Override
public void onClosed(CameraDevice device) {
}
@Override
public void onError(CameraDevice device, int error) {
}
@Override
public void onOpened(CameraDevice device) {
if (isFirstCallback) {
isFirstCallback = false;
try {
CameraCharacteristics characteristics = mCameraManager
.getCameraCharacteristics(device.getId());
OneCamera oneCamera = OneCameraCreator.create(
device,
characteristics,
mFeatureConfig,
captureSetting,
mDisplayMetrics,
mContext,
mainThread,
imageRotationCalculator,
burstController,
soundPlayer, fatalErrorHandler);
if (oneCamera != null) {
openCallback.onCameraOpened(oneCamera);
} else {
Log.d(TAG, "Could not construct a OneCamera object!");
openCallback.onFailure();
}
} catch (CameraAccessException e) {
Log.d(TAG, "Could not get camera characteristics", e);
openCallback.onFailure();
} catch (OneCameraAccessException e) {
Log.d(TAG, "Could not create OneCamera", e);
openCallback.onFailure();
}
}
}
}, handler);
}
Here, at last, the camera is actually opened. The sequence diagram for this part: [sequence diagram: the app opening the camera]
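For reference, stripped of all the wrapping, what Camera2OneCameraOpenerImpl drives here is the standard Camera2 open sequence. A minimal framework-level sketch (not the app's code; context, cameraId and backgroundHandler are assumed to exist):
// Standard Camera2 framework API; requires the CAMERA permission.
CameraManager manager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
try {
    manager.openCamera(cameraId, new CameraDevice.StateCallback() {
        @Override
        public void onOpened(CameraDevice device) {
            // Device is open; the next step is createCaptureSession() for the preview surface.
        }
        @Override
        public void onDisconnected(CameraDevice device) {
            device.close();
        }
        @Override
        public void onError(CameraDevice device, int error) {
            device.close();
        }
    }, backgroundHandler);
} catch (CameraAccessException e) {
    // Opening can fail if the camera is disabled or already in use.
}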
Once the camera is opened, onOpened() is called back. It first fetches the characteristics of the opened camera and then creates a OneCamera object, so let's study how that oneCamera is created. Many parameters are passed in: some were supplied when the Camera2OneCameraOpenerImpl object was created, others come from the open() call itself. The Camera2OneCameraOpenerImpl object is created in CameraActivity, while the caller is CaptureModule. Tracing the creation of oneCamera, it passes through OneCameraCreator, SimpleOneCameraFactory, InitializedOneCameraFactory and GenericOneCameraImpl; each class either wraps the parameters handed down to it or creates new objects to pass on. The pieces worth noting are the following:
/**InitializedOneCameraFactory**/
final CaptureSessionCreator captureSessionCreator = new CaptureSessionCreator(device,
cameraHandler);
PreviewStarter mPreviewStarter = new PreviewStarter(outputSurfaces,
captureSessionCreator,
new PreviewStarter.CameraCaptureSessionCreatedListener() {
@Override
public void onCameraCaptureSessionCreated(CameraCaptureSessionProxy session,
Surface previewSurface) {
CameraStarter.CameraControls controls = cameraStarter.startCamera(
new Lifetime(lifetime),
session, previewSurface,
zoomState, metadataCallback, readyState);
mPictureTaker.set(controls.getPictureTaker());
mManualAutoFocus.set(controls.getManualAutoFocus());
}
});
/**SimpleOneCameraFactory**/
CameraStarter cameraStarter = new CameraStarter() {
@Override
public CameraControls startCamera(Lifetime cameraLifetime,
CameraCaptureSessionProxy cameraCaptureSession,
Surface previewSurface,
Observable<Float> zoomState,
Updatable<TotalCaptureResultProxy> metadataCallback,
Updatable<Boolean> readyState) {
// Create the FrameServer from the CaptureSession.
FrameServerFactory frameServerComponent = new FrameServerFactory(
new Lifetime(cameraLifetime), cameraCaptureSession, new HandlerFactory());
CameraCommandExecutor cameraCommandExecutor = new CameraCommandExecutor(
Loggers.tagFactory(),
new Provider<ExecutorService>() {
@Override
public ExecutorService get() {
// Use a dynamically-expanding thread pool to
// allow any number of commands to execute
// simultaneously.
return Executors.newCachedThreadPool();
}
});
// Create the shared image reader.
SharedImageReaderFactory sharedImageReaderFactory =
new SharedImageReaderFactory(new Lifetime(cameraLifetime), imageReader,
new HandlerFactory());
Updatable<Long> globalTimestampCallback =
sharedImageReaderFactory.provideGlobalTimestampQueue();
ManagedImageReader managedImageReader =
sharedImageReaderFactory.provideSharedImageReader();
// Create the request builder used by all camera operations.
// Streams, ResponseListeners, and Parameters added to
// this will be applied to *all* requests sent to the camera.
RequestTemplate rootBuilder = new RequestTemplate
(new CameraDeviceRequestBuilderFactory(device));
// The shared image reader must be wired to receive every
// timestamp for every image (including the preview).
rootBuilder.addResponseListener(
ResponseListeners.forTimestamps(globalTimestampCallback));
rootBuilder.addStream(new SimpleCaptureStream(previewSurface));
rootBuilder.addResponseListener(ResponseListeners.forFinalMetadata(
metadataCallback));
FrameServer ephemeralFrameServer =
frameServerComponent.provideEphemeralFrameServer();
// Create basic functionality (zoom, AE, AF).
BasicCameraFactory basicCameraFactory = new BasicCameraFactory(new Lifetime
(cameraLifetime),
characteristics,
ephemeralFrameServer,
rootBuilder,
cameraCommandExecutor,
new BasicPreviewCommandFactory(ephemeralFrameServer),
flashSetting,
exposureSetting,
zoomState,
hdrSceneSetting,
CameraDevice.TEMPLATE_PREVIEW);
// Register the dynamic updater via orientation supplier
rootBuilder.setParam(CaptureRequest.JPEG_ORIENTATION,
mImageRotationCalculator.getSupplier());
if (GservicesHelper.isJankStatisticsEnabled(AndroidContext.instance().get()
.getContentResolver())) {
rootBuilder.addResponseListener(
new FramerateJankDetector(Loggers.tagFactory(),
UsageStatistics.instance()));
}
RequestBuilder.Factory meteredZoomedRequestBuilder =
basicCameraFactory.provideMeteredZoomedRequestBuilder();
// Create the picture-taker.
PictureTaker pictureTaker;
if (supportLevel == OneCameraFeatureConfig.CaptureSupportLevel.LEGACY_JPEG) {
pictureTaker = new LegacyPictureTakerFactory(imageSaverBuilder,
cameraCommandExecutor, mainExecutor,
frameServerComponent.provideFrameServer(),
meteredZoomedRequestBuilder, managedImageReader).providePictureTaker();
} else {
pictureTaker = PictureTakerFactory.create(Loggers.tagFactory(), mainExecutor,
cameraCommandExecutor, imageSaverBuilder,
frameServerComponent.provideFrameServer(),
meteredZoomedRequestBuilder, managedImageReader, flashSetting)
.providePictureTaker();
}
// Wire-together ready-state.
final Observable<Integer> availableImageCount = sharedImageReaderFactory
.provideAvailableImageCount();
final Observable<Boolean> frameServerAvailability = frameServerComponent
.provideReadyState();
Observable<Boolean> ready = Observables.transform(
Arrays.asList(availableImageCount, frameServerAvailability),
new Supplier<Boolean>() {
@Override
public Boolean get() {
boolean atLeastOneImageAvailable = availableImageCount.get() >= 1;
boolean frameServerAvailable = frameServerAvailability.get();
return atLeastOneImageAvailable && frameServerAvailable;
}
});
lifetime.add(Observables.addThreadSafeCallback(ready, readyState));
basicCameraFactory.providePreviewUpdater().run();
return new CameraControls(
pictureTaker,
basicCameraFactory.provideManualAutoFocus());
}
};
With oneCamera created, openCallback.onCameraOpened(oneCamera) is invoked; this openCallback is the one passed into mOneCameraOpener.open(). Inside that callback, oneCamera.startPreview() is called to start the preview. We know oneCamera is a GenericOneCameraImpl, so let's look at its startPreview() method:
/***GenericOneCameraImpl***/
@Override
public void startPreview(Surface surface, final CaptureReadyCallback listener) {
ListenableFuture<Void> result = mPreviewStarter.startPreview(surface);
Futures.addCallback(result, new FutureCallback<Void>() {
@Override
public void onSuccess(@Nonnull Void aVoid) {
listener.onReadyForCapture();
}
@Override
public void onFailure(@Nonnull Throwable throwable) {
listener.onSetupFailed();
}
});
}
Here mPreviewStarter was created in InitializedOneCameraFactory and was given a PreviewStarter.CameraCaptureSessionCreatedListener callback. Let's look at mPreviewStarter.startPreview(surface):
/**PreviewStarter.startPreview() (trimmed)**/
public ListenableFuture<Void> startPreview(final Surface surface) {
List<Surface> surfaceList = new ArrayList<>();
final ListenableFuture<CameraCaptureSessionProxy> sessionFuture =
mCaptureSessionCreator.createCaptureSession(surfaceList);
return Futures.transform(sessionFuture,
new AsyncFunction<CameraCaptureSessionProxy, Void>() {
@Override
public ListenableFuture<Void> apply(
CameraCaptureSessionProxy captureSession) throws Exception {
mSessionListener.onCameraCaptureSessionCreated(captureSession, surface);
return Futures.immediateFuture(null);
}
});
}
mCaptureSessionCreator was also created in InitializedOneCameraFactory. Let's look at mCaptureSessionCreator.createCaptureSession():
/**CaptureSessionCreator**/
public ListenableFuture<CameraCaptureSessionProxy> createCaptureSession(
List<Surface> surfaces) {
final SettableFuture<CameraCaptureSessionProxy> sessionFuture = SettableFuture.create();
try {
mDevice.createCaptureSession(surfaces, new CameraCaptureSessionProxy.StateCallback() {
@Override
public void onActive(CameraCaptureSessionProxy session) {
}
@Override
public void onConfigureFailed(CameraCaptureSessionProxy session) {
sessionFuture.cancel(true);
session.close();
}
@Override
public void onConfigured(CameraCaptureSessionProxy session) {
boolean valueSet = sessionFuture.set(session);
if (!valueSet) {
session.close();
}
}
@Override
public void onReady(CameraCaptureSessionProxy session) {
}
@Override
public void onClosed(CameraCaptureSessionProxy session) {
sessionFuture.cancel(true);
session.close();
}
}, mCameraHandler);
} catch (CameraAccessException e) {
sessionFuture.setException(e);
}
return sessionFuture;
}
Here a CaptureSession is finally created, and sessionFuture is returned. The code then transforms sessionFuture into a CameraCaptureSessionProxy-typed captureSession and calls mSessionListener.onCameraCaptureSessionCreated(captureSession, surface). This mSessionListener is the callback passed in when mPreviewStarter was created (a generic sketch of the SettableFuture bridging pattern used here follows the callback code below):
public void onCameraCaptureSessionCreated(CameraCaptureSessionProxy session,
Surface previewSurface) {
CameraStarter.CameraControls controls = cameraStarter.startCamera(
new Lifetime(lifetime),
session, previewSurface,
zoomState, metadataCallback, readyState);
mPictureTaker.set(controls.getPictureTaker());
mManualAutoFocus.set(controls.getManualAutoFocus());
}
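The bridging pattern used by CaptureSessionCreator — completing a Guava SettableFuture from a framework callback so that callers can attach FutureCallbacks or transform the result — looks like this in isolation (a generic sketch, not the app's code):
/** Generic sketch of the callback-to-future bridge **/
final SettableFuture<CameraCaptureSessionProxy> sessionFuture = SettableFuture.create();

// Producer side (cf. CaptureSessionCreator.onConfigured): sessionFuture.set(session) completes
// the future; it returns false if the future was already cancelled, in which case the session
// is closed, exactly as createCaptureSession() does above.

// Consumer side (cf. GenericOneCameraImpl.startPreview): attach a callback or transform it.
Futures.addCallback(sessionFuture, new FutureCallback<CameraCaptureSessionProxy>() {
    @Override
    public void onSuccess(CameraCaptureSessionProxy session) {
        // Session configured; ready to submit capture requests.
    }
    @Override
    public void onFailure(Throwable t) {
        // Configuration failed or was cancelled.
    }
});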
cameraStarter is created in SimpleOneCameraFactory and passed down; its startCamera() implementation is the CameraStarter block we already saw above in SimpleOneCameraFactory.
That startCamera() implementation creates a great many objects and is quite complex. The session we just created is passed into it, yet so far we have not seen any repeating request, without which there can be no continuous preview. Let's start from basicCameraFactory.providePreviewUpdater().run(), which looks like it is related to the repeating preview:
/**BasicCameraFactory**/
public Runnable providePreviewUpdater() {
return mPreviewUpdater;
}
mPreviewUpdater = new ResettingRunnableCameraCommand(cameraCommandExecutor,
previewUpdaterCommand);
CameraCommand previewUpdaterCommand =
previewCommandFactory.get(requestTemplate, templateType);
ResettingRunnableCameraCommand is what eventually runs previewUpdaterCommand. Of the two arguments used to create previewUpdaterCommand, requestTemplate comes from BasicCameraFactory itself, while templateType is the value passed in when basicCameraFactory was created in SimpleOneCameraFactory, namely CameraDevice.TEMPLATE_PREVIEW.
The previewCommandFactory here is the parameter passed into the BasicCameraFactory constructor: new BasicPreviewCommandFactory(ephemeralFrameServer).
/**BasicPreviewCommandFactory**/
@Override
public CameraCommand get(RequestBuilder.Factory previewRequestBuilder, int templateType) {
return new PreviewCommand(mFrameServer, previewRequestBuilder, templateType);
}
So what ultimately gets executed is this PreviewCommand, and its mFrameServer is the ephemeralFrameServer created in SimpleOneCameraFactory's cameraStarter:
/**SimpleOneCameraFactory**/
FrameServerFactory frameServerComponent = new FrameServerFactory(
new Lifetime(cameraLifetime), cameraCaptureSession, new HandlerFactory());
FrameServer ephemeralFrameServer =
frameServerComponent.provideEphemeralFrameServer();
/**FrameServerFactory**/
public FrameServerFactory(Lifetime lifetime, CameraCaptureSessionProxy cameraCaptureSession,
HandlerFactory handlerFactory) {
mEphemeralFrameServer = new FrameServerImpl(new TagDispatchCaptureSession
(cameraCaptureSession, cameraHandler));
}
public FrameServer provideEphemeralFrameServer() {
return mEphemeralFrameServer;
}
That is, mFrameServer = new FrameServerImpl(new TagDispatchCaptureSession(cameraCaptureSession, cameraHandler)).
We now know all three arguments used to construct PreviewCommand(mFrameServer, previewRequestBuilder, templateType): mFrameServer is a FrameServerImpl, previewRequestBuilder is a RequestTemplate created in BasicCameraFactory, and templateType is CameraDevice.TEMPLATE_PREVIEW.
Now back to PreviewCommand's run() method:
/**PreviewCommand**/
public void run() throws InterruptedException, CameraAccessException,
CameraCaptureSessionClosedException, ResourceAcquisitionFailedException {
try (FrameServer.Session session = mFrameServer.createExclusiveSession()) {
RequestBuilder photoRequest = mBuilderFactory.create(mRequestType);
session.submitRequest(Arrays.asList(photoRequest.build()),
FrameServer.RequestType.REPEATING);
}
}
Here mBuilderFactory is the previewRequestBuilder from before, and mRequestType is the templateType.
/**FrameServerImpl**/
public Session createExclusiveSession() throws InterruptedException {
checkState(!mCameraLock.isHeldByCurrentThread(), "Cannot acquire another " +
"FrameServer.Session on the same thread.");
mCameraLock.lockInterruptibly();
return new Session();
}
This returns an instance of the inner class FrameServerImpl.Session.
/**RequestTemplate**/
@Override
public RequestBuilder create(int templateType) throws CameraAccessException {
RequestBuilder builder = mRequestBuilderFactory.create(templateType);
for (Parameter param : mParameters) {
param.addToBuilder(builder);
}
for (ResponseListener listener : mResponseListeners) {
builder.addResponseListener(listener);
}
for (CaptureStream stream : mCaptureStreams) {
builder.addStream(stream);
}
return builder;
}
Here mRequestBuilderFactory is what was passed in when the RequestTemplate was created, and that RequestTemplate was created in BasicCameraFactory, so mRequestBuilderFactory is BasicCameraFactory's rootTemplate. rootTemplate, in turn, is what was passed in when BasicCameraFactory was created in SimpleOneCameraFactory, so ultimately mRequestBuilderFactory is SimpleOneCameraFactory's rootBuilder: RequestTemplate rootBuilder = new RequestTemplate(new CameraDeviceRequestBuilderFactory(device)). In other words mRequestBuilderFactory is rootBuilder, so mRequestBuilderFactory.create(templateType) is rootBuilder.create(templateType), which ends up calling new CameraDeviceRequestBuilderFactory(device).create(templateType):
/**CameraDeviceRequestBuilderFactory**/
@Override
public RequestBuilder create(int templateType) throws CameraAccessException {
return new RequestBuilder(new CaptureRequestBuilderProxy(
mCameraDevice.createCaptureRequest(templateType)));
}
Here a CaptureRequest builder is created — this is what controls the image output — and it is wrapped into a RequestBuilder and returned up the chain. (A minimal framework-level example of building such a preview request directly is sketched below.)
/**FrameServerImpl.Session**/
@Override
public void submitRequest(List<Request> burstRequests, RequestType type)
throws CameraAccessException, InterruptedException,
CameraCaptureSessionClosedException, ResourceAcquisitionFailedException {
synchronized (mLock) {
try {
if (mClosed) {
throw new SessionClosedException();
}
mCaptureSession.submitRequest(burstRequests, type);
} catch (Exception e) {
for (Request r : burstRequests) {
r.abort();
}
throw e;
}
}
}
The mCaptureSession here is the argument we passed when constructing FrameServerImpl: new TagDispatchCaptureSession(cameraCaptureSession, cameraHandler).
/**TagDispatchCaptureSession**/
public void submitRequest(List<Request> burstRequests, FrameServer.RequestType requestType)
throws
CameraAccessException, InterruptedException, CameraCaptureSessionClosedException,
ResourceAcquisitionFailedException {
try {
Map<Object, ResponseListener> tagListenerMap = new HashMap<Object, ResponseListener>();
List<CaptureRequest> captureRequests = new ArrayList<>(burstRequests.size());
for (Request request : burstRequests) {
Object tag = generateTag();
tagListenerMap.put(tag, request.getResponseListener());
CaptureRequestBuilderProxy builder = request.allocateCaptureRequest();
builder.setTag(tag);
captureRequests.add(builder.build());
}
if (requestType == FrameServer.RequestType.REPEATING) {
mCaptureSession.setRepeatingBurst(captureRequests, new
CaptureCallback(tagListenerMap), mCameraHandler);
} else {
mCaptureSession.captureBurst(captureRequests, new
CaptureCallback(tagListenerMap), mCameraHandler);
}
} catch (Exception e) {
for (Request r : burstRequests) {
r.abort();
}
throw e;
}
}
Here, at last, is mCaptureSession.setRepeatingBurst(): the capture requests are resubmitted repeatedly, which keeps frames coming and realizes the continuous preview.
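In raw framework terms this is the familiar repeating preview request. A minimal sketch (standard Camera2 API; session, previewRequest and backgroundHandler are assumed to exist, e.g. from the previous sketch):
// Standard Camera2 framework API: resubmit the preview request until it is replaced or stopped.
try {
    session.setRepeatingRequest(previewRequest, new CameraCaptureSession.CaptureCallback() {
        @Override
        public void onCaptureCompleted(CameraCaptureSession session,
                CaptureRequest request, TotalCaptureResult result) {
            // Per-frame metadata arrives here; the app's ResponseListeners play this role.
        }
    }, backgroundHandler);
} catch (CameraAccessException e) {
    // The session may already be closed.
}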
The sequence diagram for starting the preview: [preview sequence diagram]