
Android Multimedia Framework -- 15: MediaCodec Analysis

Author: DarcyZhou | Published 2023-04-29 08:32

    "本文转载自:[yanbixing123]的Android MultiMedia框架完全解析 - MediaCodec解析"

    1. Overview

      MediaCodec is a codec component that decodes and encodes with hardware acceleration. It gives chip vendors and application developers a single, unified interface. MediaCodec is the standard route to hardware decoding in virtually every Android player, so before digging into the source of a player such as NuPlayer or ijkplayer it helps to understand the basics of how it is used.

      First, let's see where MediaCodec sits inside NuPlayer:

    (Figure 01: where MediaCodec sits in the NuPlayer architecture)

    Likewise, to understand MediaCodec in depth we first need to know how it is used in an Android app.

    1.1 MediaCodecList

      MediaCodec is a unified API that supports many encoding formats, so when creating one you need to pick a decoder that matches the video's encoding. That is done with MediaCodec.createByCodecName.

      However, decoder names differ from vendor to vendor and cannot be known in advance when writing a generic player. The Android API therefore provides MediaCodecList, which enumerates the names and capabilities of the codecs available on the device so that a suitable one can be found.

      Let's start by enumerating the decoders the device supports:

    private void displayDecoders() {                                            
        MediaCodecList list = new MediaCodecList(MediaCodecList.REGULAR_CODECS);
        MediaCodecInfo[] codecs = list.getCodecInfos();                         
        for (MediaCodecInfo codec : codecs) {                                   
            if (codec.isEncoder())                                              
                continue;                                                       
            Log.i(TAG, "displayDecoders: "+codec.getName());                    
        }                                                                       
    }
    

    1.2 Creating a MediaCodec

      So a reasonable way to create a MediaCodec is:

    MediaFormat selTrackFmt = chooseVideoTrack(extractor);
    codec = createCodec(selTrackFmt, surface);
    
    private MediaFormat chooseVideoTrack(MediaExtractor extractor) {         
        int count = extractor.getTrackCount();                               
        for (int i = 0; i < count; i++) {                                    
            MediaFormat format = extractor.getTrackFormat(i);                
            if (format.getString(MediaFormat.KEY_MIME).startsWith("video/")){
            extractor.selectTrack(i); // select this track
                return format;                                               
            }                                                                
        }                                                                    
        return null;                                                         
    }  
    
    private MediaCodec createCodec(MediaFormat format, Surface surface) throws IOException{     
        MediaCodecList codecList = new MediaCodecList(MediaCodecList.REGULAR_CODECS);           
        MediaCodec codec = MediaCodec.createByCodecName(codecList.findDecoderForFormat(format));
        codec.configure(format, surface, null, 0);                                              
        return codec;                                                                           
    }
    

    We first use MediaExtractor to pick the track we want; here we simply take the first video track. Once the track is selected we obtain its MediaFormat from the MediaExtractor, and codecList.findDecoderForFormat(format) then gives us the name of the most suitable decoder.
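
    Before committing to the name returned by findDecoderForFormat, it can also be worth asking the codec itself whether the exact format is supported. Below is a minimal sketch (the helper name isFormatSupported is ours, not part of the original code; API 21+):

    private boolean isFormatSupported(String decoderName, MediaFormat format) {
        String mime = format.getString(MediaFormat.KEY_MIME);
        MediaCodecList list = new MediaCodecList(MediaCodecList.REGULAR_CODECS);
        for (MediaCodecInfo info : list.getCodecInfos()) {
            if (info.isEncoder() || !info.getName().equals(decoderName))
                continue;
            try {
                // CodecCapabilities describes supported profiles/levels, resolutions, bitrates, etc.
                return info.getCapabilitiesForType(mime).isFormatSupported(format);
            } catch (IllegalArgumentException e) {
                return false; // this codec does not handle the MIME type at all
            }
        }
        return false;
    }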

    1.3 How MediaCodec Is Used

      MediaCodec can be used in two ways -- synchronously or asynchronously. For feeding and draining the actual stream, the main methods are:

    • getInputBuffers: get the queue of input buffers to be fed to the codec; returns a ByteBuffer array;

    • queueInputBuffer: submit a filled input buffer to the queue;

    • dequeueInputBuffer: take a free input buffer out of the queue so it can be filled;

    • getOutputBuffers: get the queue of output buffers holding the encoded/decoded data; returns a ByteBuffer array;

    • dequeueOutputBuffer: take a buffer of processed data out of the output queue;

    • releaseOutputBuffer: release the ByteBuffer once it has been consumed.

    1.3.1 Synchronous Mode

      The synchronous main flow is a loop that keeps calling dequeueInputBuffer -> queueInputBuffer (fill in data) -> dequeueOutputBuffer -> releaseOutputBuffer (render the frame). Working with MediaExtractor, the usual way to feed data into MediaCodec looks like this:

    int inIndex = codec.dequeueInputBuffer(10000);
    if (inIndex >= 0) {
        ByteBuffer buffer = codec.getInputBuffer(inIndex);                                 
        int sampleSize = extractor.readSampleData(buffer, 0);                              
        if (sampleSize < 0) {                                                              
            codec.queueInputBuffer(inIndex, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
            isEOS = true;                                                                  
        } else {                                                                           
            long sampleTime = extractor.getSampleTime();                                   
            codec.queueInputBuffer(inIndex, 0, sampleSize, sampleTime, 0);                 
            extractor.advance();                                                           
        }                                                                                  
    }
    

    With MediaExtractor's readSampleData and advance we can keep pulling undecoded data for the selected track out of the file. When sampleSize is greater than or equal to zero we got data, and queueInputBuffer tells MediaCodec that the input buffer at inIndex is ready. If the return value is negative we have reached the end of the file, and queueing a buffer with the BUFFER_FLAG_END_OF_STREAM flag tells MediaCodec so.

      Fetching decoded data from MediaCodec (what we actually get back is an index) usually looks like this:

    int outIndex = codec.dequeueOutputBuffer(info, 10000);          
    Log.i(TAG, "run: outIndex="+outIndex);                          
    switch (outIndex) {                                             
        case MediaCodec.INFO_OUTPUT_FORMAT_CHANGED:                 
            Log.i(TAG, "run: new format: "+codec.getOutputFormat());
            break;                                                  
        case MediaCodec.INFO_TRY_AGAIN_LATER:                       
            Log.i(TAG, "run: try later");                           
            break;                                                  
        case MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED:                
            Log.i(TAG, "run: output buffer changed");               
            break;                                                  
        default:                                                    
            codec.releaseOutputBuffer(outIndex, true);              
            break;                                                  
    }                                                               
    
    if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) { 
        Log.i(TAG, "OutputBuffer BUFFER_FLAG_END_OF_STREAM");       
        break;                                                      
    }
    

    dequeueOutputBuffer returns the index of a decoded frame, and releaseOutputBuffer(outIndex, true) then has MediaCodec render that frame to the Surface. (If you want to control the playback rate or drop frames, this is the place to do it.)

      In special cases dequeueOutputBuffer instead returns an informational value, for example when the video format has changed or no decoded data is ready yet.
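
    Putting the two halves together, a minimal synchronous decode loop might look like the sketch below (error handling and frame pacing are omitted; extractor, codec and the output Surface are assumed to have been set up as in section 1.2, and java.nio.ByteBuffer to be imported):

    private void decodeLoop(MediaExtractor extractor, MediaCodec codec) {
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        boolean isEOS = false;
        codec.start();
        while (true) {
            if (!isEOS) {
                int inIndex = codec.dequeueInputBuffer(10000);
                if (inIndex >= 0) {
                    ByteBuffer buffer = codec.getInputBuffer(inIndex);
                    int sampleSize = extractor.readSampleData(buffer, 0);
                    if (sampleSize < 0) {
                        codec.queueInputBuffer(inIndex, 0, 0, 0,
                                MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                        isEOS = true;
                    } else {
                        codec.queueInputBuffer(inIndex, 0, sampleSize,
                                extractor.getSampleTime(), 0);
                        extractor.advance();
                    }
                }
            }
            int outIndex = codec.dequeueOutputBuffer(info, 10000);
            if (outIndex >= 0) {
                codec.releaseOutputBuffer(outIndex, true); // render this frame to the Surface
            }
            if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                break; // the decoder has emitted its last frame
            }
        }
        codec.stop();
        codec.release();
    }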

    1.3.2 Asynchronous Mode

    codec.setCallback(new MediaCodec.Callback() {                                                     
        @Override                                                                                     
        public void onInputBufferAvailable(MediaCodec codec, int index) {                             
            ByteBuffer buffer = codec.getInputBuffer(index);                                          
            int sampleSize = extractor.readSampleData(buffer, 0);                                     
            if (sampleSize < 0) {                                                                     
                codec.queueInputBuffer(index, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);         
            } else {                                                                                  
                long sampleTime = extractor.getSampleTime();                                          
                codec.queueInputBuffer(index, 0, sampleSize, sampleTime, 0);                          
                extractor.advance();                                                                  
            }                                                                                         
        }                                                                                             
    
        @Override                                                                                     
        public void onOutputBufferAvailable(MediaCodec codec, int index, MediaCodec.BufferInfo info) {
            codec.releaseOutputBuffer(index, true);                                                   
        }                                                                                             
    
        @Override                                                                                     
        public void onError(MediaCodec codec, MediaCodec.CodecException e) {                          
            Log.e(TAG, "onError: "+e.getMessage());                                                   
        }                                                                                             
    
        @Override                                                                                     
        public void onOutputFormatChanged(MediaCodec codec, MediaFormat format) {                     
            Log.i(TAG, "onOutputFormatChanged: "+format);                                             
        }                                                                                             
    });                                                                                               
    codec.start();
    

    Asynchronous playback registers a callback with MediaCodec, and MediaCodec then tells us when an input buffer is available, when an output buffer is available, when the format has changed, and so on.

      All we have to do is: when an input buffer becomes available, fill it with the data pulled out by MediaExtractor; when an output buffer becomes available, decide whether to display that frame (a real application does not display it immediately but paces rendering according to the frame rate).
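
    As one rough illustration of that pacing, the timestamp variant of releaseOutputBuffer (available since API 21) lets the platform schedule the render: map each frame's presentationTimeUs onto System.nanoTime(). In this sketch, startTimeNs is assumed to be a field holding System.nanoTime() captured when playback started and the first sample's timestamp is assumed to be 0; it would replace the onOutputBufferAvailable above:

    @Override
    public void onOutputBufferAvailable(MediaCodec codec, int index, MediaCodec.BufferInfo info) {
        // Schedule the frame for (playback start + its presentation time); frames whose
        // timestamp is already in the past are rendered as soon as possible.
        long renderTimeNs = startTimeNs + info.presentationTimeUs * 1000;
        codec.releaseOutputBuffer(index, renderTimeNs);
    }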

    2. MediaCodec Implementation

      From the usage above we can distill a few key lines:

    MediaCodec codec = MediaCodec.createDecoderByType("video/avc");
    MediaFormat format = MediaFormat.createVideoFormat("video/avc", 320, 480);
    // ... format configuration ...
    codec.configure(format, surface, null, 0);
    codec.start();
    

    2.1 createDecoderByType()

      Let's first look at how the MediaCodec object is created. createDecoderByType() (and likewise createByCodecName(), just with nameIsType set to false) wraps a few parameters and ends up in this private constructor:

    • MediaCodec.java
    private MediaCodec(
                @NonNull String name, boolean nameIsType, boolean encoder) {
            Looper looper;
            if ((looper = Looper.myLooper()) != null) {
                mEventHandler = new EventHandler(this, looper);
            } else if ((looper = Looper.getMainLooper()) != null) {
                mEventHandler = new EventHandler(this, looper);
            } else {
                mEventHandler = null;
            }
            mCallbackHandler = mEventHandler;
            mOnFrameRenderedHandler = mEventHandler;
    
            mBufferLock = new Object();
    
            native_setup(name, nameIsType, encoder);
        }
    

    This initializes a Handler and an object used as the buffer lock. Next, the native_setup() method:

    static void android_media_MediaCodec_native_setup(
            JNIEnv *env, jobject thiz,
            jstring name, jboolean nameIsType, jboolean encoder) {
        // ... some code omitted
    
        const char *tmp = env->GetStringUTFChars(name, NULL);
    
        if (tmp == NULL) {
            return;
        }
    
        sp<JMediaCodec> codec = new JMediaCodec(env, thiz, tmp, nameIsType, encoder);
    
        const status_t err = codec->initCheck();
        // ... some code omitted
        codec->registerSelf();
    
        setMediaCodec(env,thiz, codec);
    }
    

    Now the JMediaCodec constructor:

    JMediaCodec::JMediaCodec(
            JNIEnv *env, jobject thiz,
            const char *name, bool nameIsType, bool encoder)
        : mClass(NULL),
          mObject(NULL) {
        jclass clazz = env->GetObjectClass(thiz);
        CHECK(clazz != NULL);
    
        mClass = (jclass)env->NewGlobalRef(clazz);
        mObject = env->NewWeakGlobalRef(thiz);
    
        cacheJavaObjects(env);
    
        mLooper = new ALooper;
        mLooper->setName("MediaCodec_looper");
    
        mLooper->start(
                false,      // runOnCallingThread
                true,       // canCallJava
                PRIORITY_FOREGROUND);
    
        if (nameIsType) {
            mCodec = MediaCodec::CreateByType(mLooper, name, encoder, &mInitStatus);
        } else {
            mCodec = MediaCodec::CreateByComponentName(mLooper, name, &mInitStatus);
        }
        CHECK((mCodec != NULL) != (mInitStatus != OK));
    }
    

    JMediaCodec likewise has an mLooper member, an ALooper object; after creating mLooper it calls start(), which internally spawns a thread and runs it. nameIsType is true here, so we land in MediaCodec::CreateByType():

    sp<MediaCodec> MediaCodec::CreateByType(
            const sp<ALooper> &looper, const AString &mime, bool encoder, status_t *err, pid_t pid) {
        // create the MediaCodec object
        sp<MediaCodec> codec = new MediaCodec(looper, pid);

        // initialize the MediaCodec
        const status_t ret = codec->init(mime, true /* nameIsType */, encoder);
        if (err != NULL) {
            *err = ret;
        }
        return ret == OK ? codec : NULL; // NULL deallocates codec.
    }
    

    The Looper passed in here is the mLooper member of the JMediaCodec object in android_media_MediaCodec; in Android's message queue mechanism, one thread corresponds to exactly one Looper instance. Next, the MediaCodec constructor:

    MediaCodec::MediaCodec(const sp<ALooper> &looper, pid_t pid)
        : mState(UNINITIALIZED),
          mReleasedByResourceManager(false),
          mLooper(looper),
          mCodec(NULL),
          mReplyID(0),
          mFlags(0),
          mStickyError(OK),
          mSoftRenderer(NULL),
          mResourceManagerClient(new ResourceManagerClient(this)),
          mResourceManagerService(new ResourceManagerServiceProxy(pid)),
          mBatteryStatNotified(false),
          mIsVideo(false),
          mVideoWidth(0),
          mVideoHeight(0),
          mRotationDegrees(0),
          mDequeueInputTimeoutGeneration(0),
          mDequeueInputReplyID(0),
          mDequeueOutputTimeoutGeneration(0),
          mDequeueOutputReplyID(0),
          mHaveInputSurface(false),
          mHavePendingInputBuffers(false) {
    }
    

    That is just the initializer list. After the MediaCodec object is created, MediaCodec::init() is executed:

    status_t MediaCodec::init(const AString &name, bool nameIsType, bool encoder) {
        mResourceManagerService->init();
    
        // save init parameters for reset
        mInitName = name;
        mInitNameIsType = nameIsType;
        mInitIsEncoder = encoder;
    
        // Current video decoders do not return from OMX_FillThisBuffer
        // quickly, violating the OpenMAX specs, until that is remedied
        // we need to invest in an extra looper to free the main event
        // queue.
    
        // what we get here is an ACodec
        mCodec = GetCodecBase(name, nameIsType);
        if (mCodec == NULL) {
            return NAME_NOT_FOUND;
        }
    
        bool secureCodec = false;
        if (nameIsType && !strncasecmp(name.c_str(), "video/", 6)) {
            mIsVideo = true;
        } else {
            AString tmp = name;
            if (tmp.endsWith(".secure")) {
                secureCodec = true;
                tmp.erase(tmp.size() - 7, 7);
            }
            // get the MediaCodecList instance
            const sp<IMediaCodecList> mcl = MediaCodecList::getInstance();
            if (mcl == NULL) {
                mCodec = NULL;  // remove the codec.
                return NO_INIT; // if called from Java should raise IOException
            }
            // look up the codec by name
            ssize_t codecIdx = mcl->findCodecByName(tmp.c_str());
            if (codecIdx >= 0) {
                const sp<MediaCodecInfo> info = mcl->getCodecInfo(codecIdx);
                Vector<AString> mimes;
                info->getSupportedMimes(&mimes);
                for (size_t i = 0; i < mimes.size(); i++) {
                    if (mimes[i].startsWith("video/")) {
                        mIsVideo = true;
                        break;
                    }
                }
            }
        }
    
        if (mIsVideo) {
            // video codec needs dedicated looper
            if (mCodecLooper == NULL) {
                mCodecLooper = new ALooper;
                mCodecLooper->setName("CodecLooper");
                mCodecLooper->start(false, false, ANDROID_PRIORITY_AUDIO);
            }
    
            mCodecLooper->registerHandler(mCodec);
        } else {
            mLooper->registerHandler(mCodec);
        }
    
        mLooper->registerHandler(this);
    
        mCodec->setNotificationMessage(new AMessage(kWhatCodecNotify, this));
    
        sp<AMessage> msg = new AMessage(kWhatInit, this);
        msg->setString("name", name);
        msg->setInt32("nameIsType", nameIsType);
    
        if (nameIsType) {
            msg->setInt32("encoder", encoder);
        }
    
        status_t err;
        Vector<MediaResource> resources;
        MediaResource::Type type =
                secureCodec ? MediaResource::kSecureCodec : MediaResource::kNonSecureCodec;
        MediaResource::SubType subtype =
                mIsVideo ? MediaResource::kVideoCodec : MediaResource::kAudioCodec;
        resources.push_back(MediaResource(type, subtype, 1));
        for (int i = 0; i <= kMaxRetry; ++i) {
            if (i > 0) {
                // Don't try to reclaim resource for the first time.
                if (!mResourceManagerService->reclaimResource(resources)) {
                    break;
                }
            }
    
            sp<AMessage> response;
            // post the kWhatInit message
            err = PostAndAwaitResponse(msg, &response);
            if (!isResourceError(err)) {
                break;
            }
        }
        return err;
    }
    

    Here the mCodec member is initialized via GetCodecBase():

    sp<CodecBase> MediaCodec::GetCodecBase(const AString &name, bool nameIsType) {
        // at this time only ACodec specifies a mime type.
        if (nameIsType || name.startsWithIgnoreCase("omx.")) {
            return new ACodec;
        } else if (name.startsWithIgnoreCase("android.filter.")) {
            return new MediaFilter;
        } else {
            return NULL;
        }
    }
    

    Since nameIsType is true here, an ACodec instance is constructed:

    ACodec::ACodec()
        : mQuirks(0),
          mNode(0),
          mUsingNativeWindow(false),
          mNativeWindowUsageBits(0),
          mLastNativeWindowDataSpace(HAL_DATASPACE_UNKNOWN),
          mIsVideo(false),
          mIsEncoder(false),
          mFatalError(false),
          mShutdownInProgress(false),
          mExplicitShutdown(false),
          mIsLegacyVP9Decoder(false),
          mEncoderDelay(0),
          mEncoderPadding(0),
          mRotationDegrees(0),
          mChannelMaskPresent(false),
          mChannelMask(0),
          mDequeueCounter(0),
          mInputMetadataType(kMetadataBufferTypeInvalid),
          mOutputMetadataType(kMetadataBufferTypeInvalid),
          mLegacyAdaptiveExperiment(false),
          mMetadataBuffersToSubmit(0),
          mNumUndequeuedBuffers(0),
          mRepeatFrameDelayUs(-1ll),
          mMaxPtsGapUs(-1ll),
          mMaxFps(-1),
          mTimePerFrameUs(-1ll),
          mTimePerCaptureUs(-1ll),
          mCreateInputBuffersSuspended(false),
          mTunneled(false),
          mDescribeColorAspectsIndex((OMX_INDEXTYPE)0),
          mDescribeHDRStaticInfoIndex((OMX_INDEXTYPE)0) {
        mUninitializedState = new UninitializedState(this);
        mLoadedState = new LoadedState(this);
        mLoadedToIdleState = new LoadedToIdleState(this);
        mIdleToExecutingState = new IdleToExecutingState(this);
        mExecutingState = new ExecutingState(this);
    
        mOutputPortSettingsChangedState =
            new OutputPortSettingsChangedState(this);
    
        mExecutingToIdleState = new ExecutingToIdleState(this);
        mIdleToLoadedState = new IdleToLoadedState(this);
        mFlushingState = new FlushingState(this);
    
        mPortEOS[kPortIndexInput] = mPortEOS[kPortIndexOutput] = false;
        mInputEOSResult = OK;
    
        memset(&mLastNativeWindowCrop, 0, sizeof(mLastNativeWindowCrop));
    
        changeState(mUninitializedState);
    }
    

    Now back to MediaCodec::init(), looking at the latter part of the code:

    if (mIsVideo) {
            // video codec needs dedicated looper
            if (mCodecLooper == NULL) {
                mCodecLooper = new ALooper;
                mCodecLooper->setName("CodecLooper");
                mCodecLooper->start(false, false, ANDROID_PRIORITY_AUDIO);
            }
    
            mCodecLooper->registerHandler(mCodec);
        } else {
            mLooper->registerHandler(mCodec);
        }
    
        mLooper->registerHandler(this);
    
        mCodec->setNotificationMessage(new AMessage(kWhatCodecNotify, this));
    
        sp<AMessage> msg = new AMessage(kWhatInit, this);
        msg->setString("name", name);
        msg->setInt32("nameIsType", nameIsType);
    
        if (nameIsType) {
            msg->setInt32("encoder", encoder);
        }
    
        status_t err;
        Vector<MediaResource> resources;
        MediaResource::Type type =
                secureCodec ? MediaResource::kSecureCodec : MediaResource::kNonSecureCodec;
        MediaResource::SubType subtype =
                mIsVideo ? MediaResource::kVideoCodec : MediaResource::kAudioCodec;
        resources.push_back(MediaResource(type, subtype, 1));
        for (int i = 0; i <= kMaxRetry; ++i) {
            if (i > 0) {
                // Don't try to reclaim resource for the first time.
                if (!mResourceManagerService->reclaimResource(resources)) {
                    break;
                }
            }
    
            sp<AMessage> response;
            // post the kWhatInit message
            err = PostAndAwaitResponse(msg, &response);
            if (!isResourceError(err)) {
                break;
            }
        }
    

    Here init() posts a kWhatInit message, which is handled in MediaCodec::onMessageReceived():

    case kWhatInit:
            {
                sp<AReplyToken> replyID;
                CHECK(msg->senderAwaitsResponse(&replyID));
    
                if (mState != UNINITIALIZED) {
                    PostReplyWithError(replyID, INVALID_OPERATION);
                    break;
                }
    
                mReplyID = replyID;
                // set MediaCodec's state
                setState(INITIALIZING);
    
                AString name;
                CHECK(msg->findString("name", &name));
    
                int32_t nameIsType;
                int32_t encoder = false;
                CHECK(msg->findInt32("nameIsType", &nameIsType));
                if (nameIsType) {
                    CHECK(msg->findInt32("encoder", &encoder));
                }
    
                sp<AMessage> format = new AMessage;
    
                if (nameIsType) {
                    format->setString("mime", name.c_str());
                    format->setInt32("encoder", encoder);
                } else {
                    format->setString("componentName", name.c_str());
                }
    
                mCodec->initiateAllocateComponent(format);
                break;
            }
    

    This mainly switches the internal state to INITIALIZING and calls initiateAllocateComponent() on mCodec (an ACodec at this point):

    void ACodec::initiateAllocateComponent(const sp<AMessage> &msg) {
        msg->setWhat(kWhatAllocateComponent);
        msg->setTarget(this);
        msg->post();
    }
    

    Following the message queue mechanism again, we find the handler:

    ACodec::UninitializedState::onMessageReceived(const sp<AMessage> &msg)
     case ACodec::kWhatAllocateComponent:
            {
                onAllocateComponent(msg);
                handled = true;
                break;
            }
    

    Then ACodec::UninitializedState::onAllocateComponent():

    bool ACodec::UninitializedState::onAllocateComponent(const sp<AMessage> &msg) {
        ALOGV("onAllocateComponent");
    
        CHECK(mCodec->mNode == 0);
    
        OMXClient client;
        // the client connects to the OMX service
        if (client.connect() != OK) {
            mCodec->signalError(OMX_ErrorUndefined, NO_INIT);
            return false;
        }
    
        sp<IOMX> omx = client.interface();

        sp<AMessage> notify = new AMessage(kWhatOMXDied, mCodec);

        Vector<AString> matchingCodecs;
    
        AString mime;
    
        AString componentName;
        uint32_t quirks = 0;
        int32_t encoder = false;
        if (msg->findString("componentName", &componentName)) {
            sp<IMediaCodecList> list = MediaCodecList::getInstance();
            if (list != NULL && list->findCodecByName(componentName.c_str()) >= 0) {
                matchingCodecs.add(componentName);
            }
        } else {
            CHECK(msg->findString("mime", &mime));
    
            if (!msg->findInt32("encoder", &encoder)) {
                encoder = false;
            }
    
            // find the codecs that match the requirements
            MediaCodecList::findMatchingCodecs(
                    mime.c_str(),
                    encoder, // createEncoder
                    0,       // flags
                    &matchingCodecs);
        }
    
        sp<CodecObserver> observer = new CodecObserver;
        IOMX::node_id node = 0;
    
        status_t err = NAME_NOT_FOUND;
        for (size_t matchIndex = 0; matchIndex < matchingCodecs.size();
                ++matchIndex) {
            componentName = matchingCodecs[matchIndex];
            quirks = MediaCodecList::getQuirksFor(componentName.c_str());
    
            pid_t tid = gettid();
            int prevPriority = androidGetThreadPriority(tid);
            androidSetThreadPriority(tid, ANDROID_PRIORITY_FOREGROUND);
            // create the actual codec node
            err = omx->allocateNode(componentName.c_str(), observer, &mCodec->mNodeBinder, &node);
            androidSetThreadPriority(tid, prevPriority);
    
            if (err == OK) {
                break;
            } else {
                ALOGW("Allocating component '%s' failed, try next one.", componentName.c_str());
            }
    
            node = 0;
        }
    
        if (node == 0) {
            if (!mime.empty()) {
                ALOGE("Unable to instantiate a %scoder for type '%s' with err %#x.",
                        encoder ? "en" : "de", mime.c_str(), err);
            } else {
                ALOGE("Unable to instantiate codec '%s' with err %#x.", componentName.c_str(), err);
            }
    
            mCodec->signalError((OMX_ERRORTYPE)err, makeNoSideEffectStatus(err));
            return false;
        }
    
        mDeathNotifier = new DeathNotifier(notify);
        if (mCodec->mNodeBinder == NULL ||
                mCodec->mNodeBinder->linkToDeath(mDeathNotifier) != OK) {
            // This was a local binder, if it dies so do we, we won't care
            // about any notifications in the afterlife.
            mDeathNotifier.clear();
        }
    
        notify = new AMessage(kWhatOMXMessageList, mCodec);
        observer->setNotificationMessage(notify);
    
        mCodec->mComponentName = componentName;
        mCodec->mRenderTracker.setComponentName(componentName);
        mCodec->mFlags = 0;
    
        if (componentName.endsWith(".secure")) {
            mCodec->mFlags |= kFlagIsSecure;
            mCodec->mFlags |= kFlagIsGrallocUsageProtected;
            mCodec->mFlags |= kFlagPushBlankBuffersToNativeWindowOnShutdown;
        }
    
        mCodec->mQuirks = quirks;
        mCodec->mOMX = omx;
        mCodec->mNode = node;
    
        {
            sp<AMessage> notify = mCodec->mNotify->dup();
            notify->setInt32("what", CodecBase::kWhatComponentAllocated);
            notify->setString("componentName", mCodec->mComponentName.c_str());
            notify->post();
        }
    
        // this changes ACodec's state
        mCodec->changeState(mCodec->mLoadedState);
    
        return true;
    }
    

    In this code mCodec's mOMX is initialized, and the way the omx object is obtained looks very much like a client/server architecture. So let's take a short detour and look at how OMX is implemented on Android:

    (Figure 02: the client/server structure of OMX on Android)

      (1) First, the OMXClient::connect() method:

    status_t OMXClient::connect() {
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> playerbinder = sm->getService(String16("media.player"));
        sp<IMediaPlayerService> mediaservice = interface_cast<IMediaPlayerService>(playerbinder);
    
        if (mediaservice.get() == NULL) {
            ALOGE("Cannot obtain IMediaPlayerService");
            return NO_INIT;
        }
    
        sp<IOMX> mediaServerOMX = mediaservice->getOMX();
        if (mediaServerOMX.get() == NULL) {
            ALOGE("Cannot obtain mediaserver IOMX");
            return NO_INIT;
        }
    
        // If we don't want to use the codec process, and the media server OMX
        // is local, use it directly instead of going through MuxOMX
        if (!sCodecProcessEnabled &&
                mediaServerOMX->livesLocally(0 /* node */, getpid())) {
            mOMX = mediaServerOMX;
            return OK;
        }
    
        sp codecbinder = sm->getService(String16("media.codec"));
        sp codecservice = interface_cast(codecbinder);
    
        if (codecservice.get() == NULL) {
            ALOGE("Cannot obtain IMediaCodecService");
            return NO_INIT;
        }
    
        sp<IOMX> mediaCodecOMX = codecservice->getOMX();
        if (mediaCodecOMX.get() == NULL) {
            ALOGE("Cannot obtain mediacodec IOMX");
            return NO_INIT;
        }
    
        mOMX = new MuxOMX(mediaServerOMX, mediaCodecOMX);
    
        return OK;
    }
    

    This is an IPC step: the function obtains the MediaPlayerService through binder and then uses MediaPlayerService to create the OMX instance. OMXClient thus acquires the entry point into OMX and from then on can use the services OMX provides over binder. In other words, OMXClient is the entry point to OpenMAX on Android.

      As you can see, mediaServerOMX and mediaCodecOMX here are both binder proxy objects. The AOSP documentation describes OMX support as follows:

    Application Framework: At the application framework level is application code that utilizes android.media APIs to interact with the multimedia hardware.

    Binder IPC: The Binder IPC proxies facilitate communication over process boundaries. They are located in the frameworks/av/media/libmedia directory and begin with the letter "I".

    Native Multimedia Framework: At the native level, Android provides a multimedia framework that utilizes the Stagefright engine for audio and video recording and playback. Stagefright comes with a default list of supported software codecs and you can implement your own hardware codec by using the OpenMax integration layer standard. For more implementation details, see the MediaPlayer and Stagefright components located in frameworks/av/media.

    OpenMAX Integration Layer (IL): The OpenMAX IL provides a standardized way for Stagefright to recognize and use custom hardware-based multimedia codecs called components. You must provide an OpenMAX plugin in the form of a shared library named libstagefrighthw.so. This plugin links Stagefright with your custom codec components, which must be implemented according to the OpenMAX IL component standard.

      We can guess that when the server side actually implements the functionality it must involve this libstagefrighthw.so shared library; we will come back to it later. Obtaining the OMX object is the core of the whole MediaCodec implementation, and we will revisit it when analyzing configure().

      Having finished with that function, let's go back to a function called in ACodec's constructor: changeState(). It is implemented in frameworks/av/media/libstagefright/foundation/AHierarchicalStateMachine.cpp:

    void AHierarchicalStateMachine::changeState(const sp<AState> &state) {
        if (state == mState) {
            // Quick exit for the easy case.
            return;
        }
    
        Vector<sp<AState> > A;
        sp<AState> cur = mState;
        for (;;) {
            A.push(cur);
            if (cur == NULL) {
                break;
            }
            cur = cur->parentState();
        }
    
        Vector<sp<AState> > B;
        cur = state;
        for (;;) {
            B.push(cur);
            if (cur == NULL) {
                break;
            }
            cur = cur->parentState();
        }
    
        // Remove the common tail.
        while (A.size() > 0 && B.size() > 0 && A.top() == B.top()) {
            A.pop();
            B.pop();
        }
    
        mState = state;
    
        for (size_t i = 0; i < A.size(); ++i) {
            A.editItemAt(i)->stateExited();
        }
    
        for (size_t i = B.size(); i > 0;) {
            i--;
            B.editItemAt(i)->stateEntered();
        }
    }
    

    Meanwhile, in ACodec.h:

    struct ACodec : public AHierarchicalStateMachine, public CodecBase {
        // .....
         // AHierarchicalStateMachine implements the message handling
        virtual void onMessageReceived(const sp<AMessage> &msg) {
            handleMessage(msg);
        }
    }
    

    And in CodecBase.h:

    struct CodecBase : public AHandler, /* static */ ColorUtils {
    }
    

    And in AHierarchicalStateMachine.cpp:

    void AHierarchicalStateMachine::handleMessage(const sp<AMessage> &msg) {
        sp<AState> save = mState;

        sp<AState> cur = mState;
        while (cur != NULL && !cur->onMessageReceived(msg)) {
            // If you claim not to have handled the message you shouldn't
            // have called setState...
            CHECK(save == mState);
    
            cur = cur->parentState();
        }
    
        if (cur != NULL) {
            return;
        }
    
        ALOGW("Warning message %s unhandled in root state.",
             msg->debugString().c_str());
    }
    

    In other words, every Message that the Handler delivers to ACodec goes through the logic above and is offered, one by one, to the AState::onMessageReceived() of each state on the current state's chain. This is important: the later MediaCodec#start(), stop(), release() and so on all depend on it.

    That is to say, ACodec internally maintains a state machine built from AState subclasses, and we have already seen its constructor changeState() into mUninitializedState.
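
    To make that dispatch concrete, here is a tiny Java analogue of the handleMessage() loop, purely illustrative (the class and method names are ours, not AOSP's):

    // A message is offered to the current state first and then bubbles up the
    // parent chain until some state claims it; unhandled messages reach the root.
    abstract class State {
        State parent;                          // null for the root state
        abstract boolean onMessage(int what);  // return true if handled
    }

    void handleMessage(State current, int what) {
        for (State s = current; s != null; s = s.parent) {
            if (s.onMessage(what)) {
                return;                        // handled somewhere on the chain
            }
        }
        System.out.println("message " + what + " unhandled in root state");
    }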

      As changeState() above shows, the common tail of the old and new state chains is removed first; the remaining states being left call stateExited() one by one, while the states being entered are walked in reverse and have stateEntered() called. So let's see what UninitializedState::stateEntered() does:

    void ACodec::UninitializedState::stateEntered() {
        ALOGV("Now uninitialized");
    
        if (mDeathNotifier != NULL) {
            mCodec->mNodeBinder->unlinkToDeath(mDeathNotifier);
            mDeathNotifier.clear();
        }
    
        mCodec->mUsingNativeWindow = false;
        mCodec->mNativeWindow.clear();
        mCodec->mNativeWindowUsageBits = 0;
        mCodec->mNode = 0;
        mCodec->mOMX.clear();
        mCodec->mQuirks = 0;
        mCodec->mFlags = 0;
        mCodec->mInputMetadataType = kMetadataBufferTypeInvalid;
        mCodec->mOutputMetadataType = kMetadataBufferTypeInvalid;
        mCodec->mConverter[0].clear();
        mCodec->mConverter[1].clear();
        mCodec->mComponentName.clear();
    }
    

    This is really just reinitialization. At this point the construction of MediaCodec has essentially been covered. A lot of the native message-mechanism logic was skipped; there are plenty of articles about it online and it is not the focus here.

    2.2 configure()

      Now let's look at MediaCodec.configure(). There is too much code to quote in full, so here is the rough flow:

    (1)MediaCodec.java  
        configure(MediaFormat format,Surface surface, @Nullable MediaCrypto crypto,
                                        int flags)
        native native_configure()
    -------
    (2)android_media_MediaCodec.cpp
        android_media_MediaCodec_native_configure()
        JMediaCodec::configure()
    -------
    (3)MediaCodec.cpp
        configure()
        onMessageReceived()  case kWhatConfigure:
    -------
    (4)ACodec.cpp
        void ACodec::initiateConfigureComponent(const sp<AMessage> &msg)
        ACodec::LoadedState::onMessageReceived() case ACodec::kWhatConfigureComponent
        ACodec::LoadedState::onConfigureComponent()
        ACodec::configureCodec(const char *mime, const sp<AMessage> &msg)
    

    ACodec::configureCodec() is too long to quote (roughly 600 lines). It mainly uses the previously saved IOMXNode to set the codec parameters, which eventually land in the component itself, e.g. SoftAACEncoder2's internalSetParameter().
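
    At the Java level, everything that eventually reaches configureCodec() originates from the MediaFormat handed to configure(). As a sketch of what the elided format configuration in the snippet at the start of section 2 typically looks like for an H.264 decoder (the exact keys depend on the stream; spsBuffer/ppsBuffer are hypothetical ByteBuffers holding the SPS/PPS, and a format obtained from MediaExtractor.getTrackFormat() usually already carries them as "csd-0"/"csd-1"):

    MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, 320, 480);
    format.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 1 << 20); // upper bound for one compressed frame
    format.setByteBuffer("csd-0", spsBuffer);                   // codec specific data: SPS (placeholder)
    format.setByteBuffer("csd-1", ppsBuffer);                   // codec specific data: PPS (placeholder)
    codec.configure(format, surface, null /* crypto */, 0 /* flags: 0 = decoder */);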

    2.3 start()

      Next is MediaCodec.start(); again the rough flow first:

    (1)MediaCodec.java  
        start()
        native native_start()
    -------
    (2)android_media_MediaCodec.cpp
        android_media_MediaCodec_start()
        JMediaCodec::start()
    -------
    (3)MediaCodec.cpp
        start()
        onMessageReceived()  case kWhatStart:
    -------
    (4)ACodec.cpp
        void ACodec::initiateStart()
        ACodec::LoadedState::onMessageReceived() case ACodec::kWhatStart
        ACodec::LoadedState::onStart()
        
    -------
    (5)void ACodec::LoadedState::onStart() {
        ALOGV("onStart");
    
        status_t err = mCodec->mOMXNode->sendCommand(OMX_CommandStateSet, OMX_StateIdle);
        if (err != OK) {
            mCodec->signalError(OMX_ErrorUndefined, makeNoSideEffectStatus(err));
        } else {
            mCodec->changeState(mCodec->mLoadedToIdleState);
        }
    }
    

    In the final ACodec::LoadedState::onStart(), mOMXNode sends the command and OMX performs the state change. ACodec's own state becomes mLoadedToIdleState, whose stateEntered() allocates the buffers via allocateBuffers().

    void ACodec::LoadedToIdleState::stateEntered() {
        status_t err;
        if ((err = allocateBuffers()) != OK) {
            // ......
        }
    }
    

      Once OMX has processed OMX_CommandStateSet / OMX_StateIdle it sends an event back to ACodec. Let's see how ACodec handles it:

    bool ACodec::LoadedToIdleState::onOMXEvent(
            OMX_EVENTTYPE event, OMX_U32 data1, OMX_U32 data2) {
        switch (event) {
            case OMX_EventCmdComplete:
            {
                //....
    
                if (err == OK) {
                    err = mCodec->mOMXNode->sendCommand(
                        OMX_CommandStateSet, OMX_StateExecuting);
                }
    
                if (err != OK) {
                    mCodec->signalError(OMX_ErrorUndefined, makeNoSideEffectStatus(err));
                } else {
                    mCodec->changeState(mCodec->mIdleToExecutingState);
                }
    
                return true;
            }
    
        }
    }
    

    After LoadedToIdleState receives OMX_EventCmdComplete it sends the OMX_StateExecuting command to OMX, and ACodec switches to IdleToExecutingState. Continuing on, the ACodec state eventually becomes ExecutingState.

    3. MediaCodec Input Buffers

      This section analyzes how MediaCodec obtains an available input buffer and queues it. The output side follows essentially the same flow and is not analyzed separately.

    3.1 Data Structures

    • MediaCodec.cpp
    List<size_t> mAvailPortBuffers[2];
    std::vector<BufferInfo> mPortBuffers[2];
    
    • mAvailPortBuffers: the indices of the currently available buffers; mAvailPortBuffers[0] is the input side, mAvailPortBuffers[1] the output side.
    • mPortBuffers: all buffers, both free and in use; mPortBuffers[0] holds the input buffers, mPortBuffers[1] the output buffers.
      These buffers are allocated when the MediaCodec is configured.

    3.2 dequeueInputBuffer()

      Let's follow how the MediaCodec input buffers are used. The Java-layer code is skipped here; we go straight to dequeueInputBuffer in MediaCodec.cpp.

    status_t MediaCodec::dequeueInputBuffer(size_t *index, int64_t timeoutUs) {
        sp<AMessage> msg = new AMessage(kWhatDequeueInputBuffer, this);
        msg->setInt64("timeoutUs", timeoutUs);
    
        sp<AMessage> response;
        status_t err;
        if ((err = PostAndAwaitResponse(msg, &response)) != OK) {
            return err;
        }
    
        CHECK(response->findSize("index", index));
    
        return OK;
    }
    

    An AMessage is posted here; MediaCodec inherits from AHandler and handles the message in onMessageReceived():

    void MediaCodec::onMessageReceived(const sp<AMessage> &msg) {
        switch (msg->what()) {
            case kWhatDequeueInputBuffer:
            {
                //.....
                if (handleDequeueInputBuffer(replyID, true /* new request */)) {
                    break;
                }
            }
            //.......
        }
    }
    

    Next, handleDequeueInputBuffer():

    bool MediaCodec::handleDequeueInputBuffer(const sp<AReplyToken> &replyID, bool newRequest) {
        if (!isExecuting() || (mFlags & kFlagIsAsync)
                || (newRequest && (mFlags & kFlagDequeueInputPending))) {
            PostReplyWithError(replyID, INVALID_OPERATION);
            return true;
        } else if (mFlags & kFlagStickyError) {
            PostReplyWithError(replyID, getStickyError());
            return true;
        }
    
        ssize_t index = dequeuePortBuffer(kPortIndexInput);
    
        if (index < 0) {
            CHECK_EQ(index, -EAGAIN);
            return false;
        }
    
        sp<AMessage> response = new AMessage;
        response->setSize("index", index);
        response->postReply(replyID);
    
        return true;
    }
    
    • check the current state and flags;
    • dequeuePortBuffer takes the index of an available input buffer from mAvailPortBuffers;
    • the index is returned through the response; at the Java level this becomes the return value of dequeueInputBuffer (see the sketch below).
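
    At the Java level, the -EAGAIN case above is what the application sees as a negative return value when the timeout passed to dequeueInputBuffer expires. A small sketch (the 10 ms timeout is arbitrary):

    int inIndex = codec.dequeueInputBuffer(10_000 /* us */);
    if (inIndex < 0) {
        // No input buffer is currently free (the native -EAGAIN case); try again later.
    } else {
        // inIndex can now be passed to getInputBuffer()/queueInputBuffer().
    }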

    3.3 getInputBuffer()

      For getInputBuffer in MediaCodec.java we go straight to the JNI method:

    static jobject android_media_MediaCodec_getBuffer(
            JNIEnv *env, jobject thiz, jboolean input, jint index) {
    
        jobject buffer;
        // get the buffer and return it
        status_t err = codec->getBuffer(env, input, index, &buffer);
        if (err == OK) {
            return buffer;
        }
    
        return NULL;
    }
    ---------------
    status_t JMediaCodec::getBuffer(
            JNIEnv *env, bool input, size_t index, jobject *buf) const {
        sp<MediaCodecBuffer> buffer;
    
        status_t err =
            input
                ? mCodec->getInputBuffer(index, &buffer)
                : mCodec->getOutputBuffer(index, &buffer);
    
        if (err != OK) {
            return err;
        }
    
        return createByteBufferFromABuffer(
                env, !input /* readOnly */, input /* clearBuffer */, buffer, buf);
    }
    
    • call getInputBuffer() in MediaCodec.cpp to get the buffer for this index;
    • create a ByteBuffer from the MediaCodecBuffer, so that Java and native code share the same memory;
    • return the ByteBuffer.

    Next, mCodec->getInputBuffer():

    status_t MediaCodec::getInputBuffer(size_t index, sp<MediaCodecBuffer> *buffer) {
        sp<AMessage> format;
        return getBufferAndFormat(kPortIndexInput, index, buffer, &format);
    }
    -------------
    status_t MediaCodec::getBufferAndFormat(
            size_t portIndex, size_t index,
            sp<MediaCodecBuffer> *buffer, sp<AMessage> *format) {
            //...
        std::vector<BufferInfo> &buffers = mPortBuffers[portIndex];
        if (index >= buffers.size()) {
            return INVALID_OPERATION;
        }
    
        const BufferInfo &info = buffers[index];
        if (!info.mOwnedByClient) {
            return INVALID_OPERATION;
        }
    
        *buffer = info.mData;
        *format = info.mData->format();
    
        return OK;
    }
    
    • mPortBuffers distinguishes input from output by index; here we take the vector of input BufferInfo entries;
    • the BufferInfo at the requested index is looked up;
    • the buffer pointer is set to that BufferInfo's mData (see the sketch below).
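
    Seen from the application, the ByteBuffer returned by getInputBuffer() wraps this same info.mData, so whatever the app writes into it is exactly what queueInputBuffer() later hands to the codec. A minimal sketch (rawFrame is a hypothetical byte[] holding one encoded access unit, presentationTimeUs its timestamp):

    ByteBuffer input = codec.getInputBuffer(inIndex);
    input.clear();        // reset position/limit before writing
    input.put(rawFrame);  // copy the encoded data into the shared buffer
    codec.queueInputBuffer(inIndex, 0 /* offset */, rawFrame.length, presentationTimeUs, 0 /* flags */);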

    3.4 queueInputBuffer()

    static void android_media_MediaCodec_queueInputBuffer(
            JNIEnv *env,
            jobject thiz,
            jint index,
            jint offset,
            jint size,
            jlong timestampUs,
            jint flags) {
        sp<JMediaCodec> codec = getMediaCodec(env, thiz);
    
        if (codec == NULL) {
            throwExceptionAsNecessary(env, INVALID_OPERATION);
            return;
        }
    
        AString errorDetailMsg;
        status_t err = codec->queueInputBuffer(
                index, offset, size, timestampUs, flags, &errorDetailMsg);
        throwExceptionAsNecessary(
                env, err, ACTION_CODE_FATAL, errorDetailMsg.empty() ? NULL : errorDetailMsg.c_str());
    }
    ----------------
    status_t JMediaCodec::queueInputBuffer(
            size_t index,
            size_t offset, size_t size, int64_t timeUs, uint32_t flags,
            AString *errorDetailMsg) {
        return mCodec->queueInputBuffer(
                index, offset, size, timeUs, flags, errorDetailMsg);
    }
    

    The JNI method android_media_MediaCodec_queueInputBuffer obtains the JMediaCodec and calls its queueInputBuffer, which in turn calls MediaCodec's queueInputBuffer():

    status_t MediaCodec::queueInputBuffer(
            size_t index,
            size_t offset,
            size_t size,
            int64_t presentationTimeUs,
            uint32_t flags,
            AString *errorDetailMsg) {
        if (errorDetailMsg != NULL) {
            errorDetailMsg->clear();
        }
    
        sp<AMessage> msg = new AMessage(kWhatQueueInputBuffer, this);
        msg->setSize("index", index);
        msg->setSize("offset", offset);
        msg->setSize("size", size);
        msg->setInt64("timeUs", presentationTimeUs);
        msg->setInt32("flags", flags);
        msg->setPointer("errorDetailMsg", errorDetailMsg);
    
        sp<AMessage> response;
        return PostAndAwaitResponse(msg, &response);
    }
    ------------------
    void MediaCodec::onMessageReceived(const sp<AMessage> &msg) {
        switch (msg->what()) {
            case kWhatQueueInputBuffer:{
                    //......
                status_t err = onQueueInputBuffer(msg);
                PostReplyWithError(replyID, err);
                break;
            }
        }
    }
    

    onMessageReceived receives the kWhatQueueInputBuffer message and calls onQueueInputBuffer():

    status_t MediaCodec::onQueueInputBuffer(const sp<AMessage> &msg) {
       //......
        sp<MediaCodecBuffer> buffer = info->mData;
        status_t err = OK;
        if (hasCryptoOrDescrambler()) {
            // ... crypto/descrambler path omitted ...
        } else {
            err = mBufferChannel->queueInputBuffer(buffer);
        }
    
        return err;
    }
    

    This calls ACodecBufferChannel's queueInputBuffer():

    status_t ACodecBufferChannel::queueInputBuffer(const sp<MediaCodecBuffer> &buffer) {
        if (mDealer != nullptr) {
            return -ENOSYS;
        }
        std::shared_ptr<const std::vector<const BufferInfo>> array(
                std::atomic_load(&mInputBuffers));
        BufferInfoIterator it = findClientBuffer(array, buffer);
        if (it == array->end()) {
            return -ENOENT;
        }
        sp<AMessage> msg = mInputBufferFilled->dup();
        msg->setObject("buffer", it->mCodecBuffer);
        msg->setInt32("buffer-id", it->mBufferId);
        msg->post();
        return OK;
    }
    

    What exactly ACodecBufferChannel is for is not examined here; in any case it posts the kWhatInputBufferFilled message.

    After ACodec receives kWhatInputBufferFilled, processing continues in ACodec::BaseState::onInputBufferFilled():

    void ACodec::BaseState::onInputBufferFilled(const sp<AMessage> &msg) {
        // .......
        switch (mode) {
            case KEEP_BUFFERS: {
                // ...
                break;
            }
            case RESUBMIT_BUFFERS: {
                status_t err2 = mCodec->mOMXNode->emptyBuffer(
                        bufferID, OMXBuffer::sPreset, OMX_BUFFERFLAG_EOS, 0, info->mFenceFd);
                break;
            }
            case FREE_BUFFERS:
                break;
        }
    }
    

    ACodec then notifies OMX, via emptyBuffer, that the input data is ready to be consumed.
