
Android Multimedia Framework -- 09: start Flow Analysis

Author: DarcyZhou | Published 2023-04-23 08:21

    "本文转载自:[yanbixing123]的 Android MultiMedia框架完全解析 - start流程分析"

    1. Overview

      The data source was prepared in the previous articles; playback now begins with a call to start(). The Java-layer MediaPlayer.start() drops through JNI into the native client, whose implementation lives in mediaplayer.cpp:

    • mediaplayer.cpp
    status_t MediaPlayer::start()
    {
    ......
            mPlayer->setLooping(mLoop);
            mPlayer->setVolume(mLeftVolume, mRightVolume);
            mPlayer->setAuxEffectSendLevel(mSendLevel);
            mCurrentState = MEDIA_PLAYER_STARTED;
            ret = mPlayer->start();
    ......
        return ret;
    }
    

    That is the core of it. Note that mPlayer here is of type IMediaPlayer: it is the Bp end of the anonymous IMediaPlayer Binder service, so the call is ultimately marshalled through this anonymous Binder service to MediaPlayerService. The other calls (setLooping, setVolume, ...) are not analyzed here; we follow the final start() call.
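
    For reference, the Bp-side stub in IMediaPlayer.cpp looks roughly like this (abridged sketch; the exact code varies by AOSP release). The call is flattened into a Parcel and sent across Binder to the Bn end living in MediaPlayerService:

    // IMediaPlayer.cpp (Bp end, abridged sketch -- version-dependent)
    status_t BpMediaPlayer::start()
    {
        Parcel data, reply;
        data.writeInterfaceToken(IMediaPlayer::getInterfaceDescriptor());
        remote()->transact(START, data, &reply);  // cross-process hop to the Bn end
        return reply.readInt32();                 // status code returned by the server
    }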

    2. start()

      The call travels from the Bp end of IMediaPlayer to the Bn end and arrives at MediaPlayerService. MediaPlayerService created a Client for this player, so the function that ultimately runs is MediaPlayerService::Client::start():

    status_t MediaPlayerService::Client::start()
    {
        ALOGV("[%d] start", mConnId);
        sp<MediaPlayerBase> p = getPlayer();
        if (p == 0) return UNKNOWN_ERROR;
        p->setLooping(mLoop);
        return p->start();
    }
    

    The MediaPlayerBase obtained here is a NuPlayerDriver, so the call lands in NuPlayerDriver::start():

    status_t NuPlayerDriver::start() {
        ALOGD("start(%p), state is %d, eos is %d", this, mState, mAtEOS);
        Mutex::Autolock autoLock(mLock);
    
        switch (mState) {
        case STATE_PREPARED:
            {
                mAtEOS = false;
                mPlayer->start();
    
                if (mStartupSeekTimeUs >= 0) {
                    mPlayer->seekToAsync(mStartupSeekTimeUs);
                    mStartupSeekTimeUs = -1;
                }
                break;
            }
        ...
    }

    After the earlier prepare step, the state is already STATE_PREPARED, and mPlayer here is the NuPlayer, so the call continues into NuPlayer::start():

    void NuPlayer::start() {
        (new AMessage(kWhatStart, this))->post();
    }
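
    The post()/onMessageReceived() round trip here is the standard AMessage/AHandler pattern used throughout NuPlayer. A minimal self-contained sketch of the pattern (the handler name and message constant are illustrative, not actual NuPlayer members):

    // Sketch of the AMessage/AHandler pattern (hypothetical handler).
    // The handler must be registered on a started ALooper for post() to work.
    struct MyPlayer : public AHandler {
        enum { kWhatStart = 'strt' };

        void start() {
            // post() is asynchronous: it enqueues the message on the looper
            // this handler is registered with and returns immediately.
            (new AMessage(kWhatStart, this))->post();
        }

    protected:
        void onMessageReceived(const sp<AMessage> &msg) override {
            switch (msg->what()) {
                case kWhatStart:
                    // runs later, on the looper thread
                    break;
            }
        }
    };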
    

    The call is forwarded on through the message mechanism:

    void NuPlayer::onMessageReceived(const sp<AMessage> &msg) {
    case kWhatStart:
            {
                ALOGV("kWhatStart");
                if (mStarted) {
                    // do not resume yet if the source is still buffering
                    if (!mPausedForBuffering) {
                        onResume();
                    }
                } else {
                    onStart();
                }
                mPausedByClient = false;
                break;
            }
    

    We have finally reached the core. Let's examine this onStart function in detail:

    void NuPlayer::onStart(int64_t startPositionUs) {
        if (!mSourceStarted) {
            mSourceStarted = true;
            mSource->start();
        }
        if (startPositionUs > 0) {
            performSeek(startPositionUs);
            if (mSource->getFormat(false /* audio */) == NULL) {
                return;
            }
        }
    
        mOffloadAudio = false;
        mAudioEOS = false;
        mVideoEOS = false;
        mStarted = true;
    
        uint32_t flags = 0;
    
        if (mSource->isRealTime()) {
            flags |= Renderer::FLAG_REAL_TIME;
        }
    
        sp<MetaData> audioMeta = mSource->getFormatMeta(true /* audio */);
        audio_stream_type_t streamType = AUDIO_STREAM_MUSIC;
        if (mAudioSink != NULL) {
            streamType = mAudioSink->getAudioStreamType();
        }
    
        sp<AMessage> videoFormat = mSource->getFormat(false /* audio */);
    
        mOffloadAudio =
            canOffloadStream(audioMeta, (videoFormat != NULL), mSource->isStreaming(), streamType);
        if (mOffloadAudio) {
            flags |= Renderer::FLAG_OFFLOAD_AUDIO;
        }
    
        sp<AMessage> notify = new AMessage(kWhatRendererNotify, this);
        ++mRendererGeneration;
        notify->setInt32("generation", mRendererGeneration);
        mRenderer = new Renderer(mAudioSink, notify, flags);
        mRendererLooper = new ALooper;
        mRendererLooper->setName("NuPlayerRenderer");
        mRendererLooper->start(false, false, ANDROID_PRIORITY_AUDIO);
        mRendererLooper->registerHandler(mRenderer);
    
        status_t err = mRenderer->setPlaybackSettings(mPlaybackSettings);
        if (err != OK) {
            mSource->stop();
            mSourceStarted = false;
            notifyListener(MEDIA_ERROR, MEDIA_ERROR_UNKNOWN, err);
            return;
        }
    
        float rate = getFrameRate();
        if (rate > 0) {
            mRenderer->setVideoFrameRate(rate);
        }
    
        if (mVideoDecoder != NULL) {
            mVideoDecoder->setRenderer(mRenderer);
        }
        if (mAudioDecoder != NULL) {
            mAudioDecoder->setRenderer(mRenderer);
        }
    
        if(mVideoDecoder != NULL){
            scheduleSetVideoDecoderTime();
        }
        postScanSources();
    }
    

    The main flow of this code: start the source; perform the startup seek if one is pending; decide whether the audio can be offloaded (canOffloadStream); create the Renderer together with its own "NuPlayerRenderer" looper; apply the playback settings; hand the renderer to the audio and video decoders; and finally call postScanSources().


      (1)首先来看mSource->start()函数,在之前的NuPlayer::setDataSourceAsync函数中,创建了一个GenericSource:

    sp<GenericSource> source = new GenericSource(notify, mUIDValid, mUID);
    

    Then, in the kWhatSetDataSource case of NuPlayer::onMessageReceived, NuPlayer's mSource was set to that GenericSource:

    void NuPlayer::onMessageReceived(const sp<AMessage> &msg) {
        switch (msg->what()) {
            case kWhatSetDataSource:
            {
                ALOGV("kWhatSetDataSource");
    
                CHECK(mSource == NULL);
    
                status_t err = OK;
                sp<RefBase> obj;
                CHECK(msg->findObject("source", &obj));
                if (obj != NULL) {
                    mSource = static_cast<Source *>(obj.get());
    

    所以这里的mSource->start()函数最终跑到GenericSource.cpp中去执行了。

    void NuPlayer::GenericSource::start() {
        ALOGI("start");
    
        mStopRead = false;
        if (mAudioTrack.mSource != NULL) {
            postReadBuffer(MEDIA_TRACK_TYPE_AUDIO);
        }
    
        if (mVideoTrack.mSource != NULL) {
            postReadBuffer(MEDIA_TRACK_TYPE_VIDEO);
        }
    
        setDrmPlaybackStatusIfNeeded(Playback::START, getLastReadPosition() / 1000);
        mStarted = true;
    
        (new AMessage(kWhatStart, this))->post();
    }
    

    Here postReadBuffer() is invoked for the video track and the audio track to schedule reads of their data, and a kWhatStart message is posted.

      (2) First look at postReadBuffer(); depending on the track type, it schedules the corresponding read:

    void NuPlayer::GenericSource::postReadBuffer(media_track_type trackType) {
        Mutex::Autolock _l(mReadBufferLock);
    
        if ((mPendingReadBufferTypes & (1 << trackType)) == 0) {
            mPendingReadBufferTypes |= (1 << trackType);
            sp<AMessage> msg = new AMessage(kWhatReadBuffer, this);
            msg->setInt32("trackType", trackType);
            msg->post();
        }
    }
    ---------------------
    void NuPlayer::GenericSource::onMessageReceived(const sp<AMessage> &msg) {
    case kWhatReadBuffer:
          {
              onReadBuffer(msg);
              break;
          }
    ---------------------
    void NuPlayer::GenericSource::onReadBuffer(sp<AMessage> msg) {
        int32_t tmpType;
        CHECK(msg->findInt32("trackType", &tmpType));
        media_track_type trackType = (media_track_type)tmpType;
        readBuffer(trackType);
        {
            // only protect the variable change, as readBuffer may
            // take considerable time.
            Mutex::Autolock _l(mReadBufferLock);
            mPendingReadBufferTypes &= ~(1 << trackType);
        }
    }
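
    Note the role of mPendingReadBufferTypes: the bitmask guarantees at most one outstanding kWhatReadBuffer message per track type. A self-contained illustration of that dedup pattern (hypothetical class, with std::mutex standing in for the framework's Mutex):

    #include <cstdint>
    #include <mutex>

    // Hypothetical stand-in for GenericSource's pending-read bookkeeping.
    struct ReadScheduler {
        std::mutex mReadBufferLock;
        uint32_t mPendingReadBufferTypes = 0;

        // Returns true if a new read request should be posted; false if one
        // for this track type is already in flight (the duplicate is dropped).
        bool shouldPost(uint32_t trackType) {
            std::lock_guard<std::mutex> _l(mReadBufferLock);
            if (mPendingReadBufferTypes & (1u << trackType)) return false;
            mPendingReadBufferTypes |= (1u << trackType);
            return true;
        }

        // Clears the bit after readBuffer() completes, mirroring onReadBuffer().
        void markDone(uint32_t trackType) {
            std::lock_guard<std::mutex> _l(mReadBufferLock);
            mPendingReadBufferTypes &= ~(1u << trackType);
        }
    };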
    

    After another chain of message dispatches we finally arrive at readBuffer(), which does different work depending on the track type. Following along:

    void NuPlayer::GenericSource::readBuffer(
            media_track_type trackType, int64_t seekTimeUs, MediaPlayerSeekMode mode,
            int64_t *actualTimeUs, bool formatChange) {
        ...
        Track *track;
        size_t maxBuffers = 1;
        switch (trackType) { // pick maxBuffers by track type and select the track
            case MEDIA_TRACK_TYPE_VIDEO: // video
                track = &mVideoTrack;
                maxBuffers = 8;  // too large of a number may influence seeks
                break;
            case MEDIA_TRACK_TYPE_AUDIO: // audio
                track = &mAudioTrack;
                maxBuffers = 64;
                if (mIsByteMode) {
                    maxBuffers = 1;
                }
                break;
            case MEDIA_TRACK_TYPE_SUBTITLE: // subtitles
                track = &mSubtitleTrack;
            ...
        }
        ...
        for (size_t numBuffers = 0; numBuffers < maxBuffers; ) {
            Vector<MediaBuffer *> mediaBuffers;
            status_t err = NO_ERROR;
    
            if (couldReadMultiple) { // usually true
                // read media data from the file to fill mediaBuffers
                err = track->mSource->readMultiple(
                        &mediaBuffers, maxBuffers - numBuffers, &options);
            } else { // read() ultimately calls readMultiple() too, with maxBuffers = 1
                MediaBuffer *mbuf = NULL;
                err = track->mSource->read(&mbuf, &options);
                if (err == OK && mbuf != NULL) {
                    mediaBuffers.push_back(mbuf);
                }
            }
    
            options.clearNonPersistent();
    
            size_t id = 0;
            size_t count = mediaBuffers.size();
            // repackage the data from each MediaBuffer just read into mPackets
            for (; id < count; ++id) {
                int64_t timeUs;
                MediaBuffer *mbuf = mediaBuffers[id];
                ...
                // update buffering state via mBufferingMonitor, according to track type
                if (trackType == MEDIA_TRACK_TYPE_AUDIO) {
                    mAudioTimeUs = timeUs;
                    mBufferingMonitor->updateQueuedTime(true /* isAudio */, timeUs);
                } else if (trackType == MEDIA_TRACK_TYPE_VIDEO) {
                    mVideoTimeUs = timeUs;
                    mBufferingMonitor->updateQueuedTime(false /* isAudio */, timeUs);
                }
    
                queueDiscontinuityIfNeeded(seeking, formatChange, trackType, track);
                // convert the MediaBuffer to an ABuffer according to track type
                sp<ABuffer> buffer = mediaBufferToABuffer(mbuf, trackType);
                ...
                // queue the buffer, where it waits to be decoded
                track->mPackets->queueAccessUnit(buffer);
                formatChange = false;
                seeking = false;
                ++numBuffers;
            }
            ...
        }
    }
    

    The most important call is track->mSource->read(&mbuf, &options). Depending on the track type, track points at the corresponding concrete track; for the video track, track = &mVideoTrack.

      (3) Recall that in NuPlayer::GenericSource::initFromDataSource(), an outer for loop calls sp<IMediaSource> track = extractor->getTrack(i); for each track. The extractor is created according to the container type (assume an mp4 file here, hence MPEG4Extractor), and MPEG4Extractor::getTrack() returns the individual tracks. GenericSource then stores them in mAudioTrack / mVideoTrack as well as in Vector<sp<IMediaSource>> mSources:

        for (size_t i = 0; i < numtracks; ++i) { // iterate the tracks, adding each A/V track's mime to mMimes
            sp<IMediaSource> track = extractor->getTrack(i); // get each track
            ...
            sp<MetaData> meta = extractor->getTrackMetaData(i); // get each track's metadata
            ......
                    mVideoTrack.mIndex = i;
                    mVideoTrack.mSource = track;
                    mVideoTrack.mPackets =
                        new AnotherPacketSource(mVideoTrack.mSource->getFormat());
                        ......
            mSources.push(track); // keep all the tracks in mSources
    

    So the mVideoTrack source here is ultimately the MPEG4Source created inside MPEG4Extractor, and track->mSource->read corresponds to MPEG4Source::read() (in MPEG4Extractor.cpp). Note that the snippet below comes from a newer AOSP release than the GenericSource code above, which is why its signature uses MediaBufferHelper and media_status_t:

    media_status_t MPEG4Source::read(
            MediaBufferHelper **out, const ReadOptions *options) {
        Mutex::Autolock autoLock(mLock);
    
        CHECK(mStarted);
    
        if (options != nullptr && options->getNonBlocking() && !mBufferGroup->has_buffers()) {
            *out = nullptr;
            return AMEDIA_ERROR_WOULD_BLOCK;
        }
    
        if (mFirstMoofOffset > 0) {
            return fragmentedRead(out, options);
        }
    
        *out = NULL;
    
        int64_t targetSampleTimeUs = -1;
    
        int64_t seekTimeUs;
        ReadOptions::SeekMode mode;
        if (options && options->getSeekTo(&seekTimeUs, &mode)) {
    
            if (mIsHeif) {
                CHECK(mSampleTable == NULL);
                CHECK(mItemTable != NULL);
                int32_t imageIndex;
                if (!AMediaFormat_getInt32(mFormat, AMEDIAFORMAT_KEY_TRACK_ID, &imageIndex)) {
                    return AMEDIA_ERROR_MALFORMED;
                }
    
                status_t err;
                if (seekTimeUs >= 0) {
                    err = mItemTable->findImageItem(imageIndex, &mCurrentSampleIndex);
                } else {
                    err = mItemTable->findThumbnailItem(imageIndex, &mCurrentSampleIndex);
                }
                if (err != OK) {
                    return AMEDIA_ERROR_UNKNOWN;
                }
            } else {
                uint32_t findFlags = 0;
                switch (mode) {
                    case ReadOptions::SEEK_PREVIOUS_SYNC:
                        findFlags = SampleTable::kFlagBefore;
                        break;
                    case ReadOptions::SEEK_NEXT_SYNC:
                        findFlags = SampleTable::kFlagAfter;
                        break;
                    case ReadOptions::SEEK_CLOSEST_SYNC:
                    case ReadOptions::SEEK_CLOSEST:
                        findFlags = SampleTable::kFlagClosest;
                        break;
                    case ReadOptions::SEEK_FRAME_INDEX:
                        findFlags = SampleTable::kFlagFrameIndex;
                        break;
                    default:
                        CHECK(!"Should not be here.");
                        break;
                }
                if( mode != ReadOptions::SEEK_FRAME_INDEX) {
                    seekTimeUs += ((long double)mElstShiftStartTicks * 1000000) / mTimescale;
                }
    
                uint32_t sampleIndex;
                status_t err = mSampleTable->findSampleAtTime(
                        seekTimeUs, 1000000, mTimescale,
                        &sampleIndex, findFlags);
    
                if (mode == ReadOptions::SEEK_CLOSEST
                        || mode == ReadOptions::SEEK_FRAME_INDEX) {
                    // We found the closest sample already, now we want the sync
                    // sample preceding it (or the sample itself of course), even
                    // if the subsequent sync sample is closer.
                    findFlags = SampleTable::kFlagBefore;
                }
    
                uint32_t syncSampleIndex = sampleIndex;
                // assume every audio sample is a sync sample. This works around
                // seek issues with files that were incorrectly written with an
                // empty or single-sample stss block for the audio track
                if (err == OK && !mIsAudio) {
                    err = mSampleTable->findSyncSampleNear(
                            sampleIndex, &syncSampleIndex, findFlags);
                }
    
                uint64_t sampleTime;
                if (err == OK) {
                    err = mSampleTable->getMetaDataForSample(
                            sampleIndex, NULL, NULL, &sampleTime);
                }
    
                if (err != OK) {
                    if (err == ERROR_OUT_OF_RANGE) {
                        // An attempt to seek past the end of the stream would
                        // normally cause this ERROR_OUT_OF_RANGE error. Propagating
                        // this all the way to the MediaPlayer would cause abnormal
                        // termination. Legacy behaviour appears to be to behave as if
                        // we had seeked to the end of stream, ending normally.
                        return AMEDIA_ERROR_END_OF_STREAM;
                    }
                    ALOGV("end of stream");
                    return AMEDIA_ERROR_UNKNOWN;
                }
    
                if (mode == ReadOptions::SEEK_CLOSEST
                    || mode == ReadOptions::SEEK_FRAME_INDEX) {
                    sampleTime -= mElstShiftStartTicks;
                    targetSampleTimeUs = (sampleTime * 1000000ll) / mTimescale;
                }
    
    #if 0
                uint32_t syncSampleTime;
                CHECK_EQ(OK, mSampleTable->getMetaDataForSample(
                            syncSampleIndex, NULL, NULL, &syncSampleTime));
    
                ALOGI("seek to time %lld us => sample at time %lld us, "
                     "sync sample at time %lld us",
                     seekTimeUs,
                     sampleTime * 1000000ll / mTimescale,
                     syncSampleTime * 1000000ll / mTimescale);
    #endif
    
                mCurrentSampleIndex = syncSampleIndex;
            }
    
            if (mBuffer != NULL) {
                mBuffer->release();
                mBuffer = NULL;
            }
    
            // fall through
        }
    
        off64_t offset = 0;
        size_t size = 0;
        uint64_t cts, stts;
        bool isSyncSample;
        bool newBuffer = false;
        if (mBuffer == NULL) {
            newBuffer = true;
    
            status_t err;
            if (!mIsHeif) {
                err = mSampleTable->getMetaDataForSample(
                        mCurrentSampleIndex, &offset, &size, &cts, &isSyncSample, &stts);
                if(err == OK) {
                    /* Composition Time Stamp cannot be negative. Some files have video Sample
                    * Time(STTS)delta with zero value(b/117402420).  Hence subtract only
                    * min(cts, mElstShiftStartTicks), so that audio tracks can be played.
                    */
                    cts -= std::min(cts, mElstShiftStartTicks);
                }
    
            } else {
                err = mItemTable->getImageOffsetAndSize(
                        options && options->getSeekTo(&seekTimeUs, &mode) ?
                                &mCurrentSampleIndex : NULL, &offset, &size);
    
                cts = stts = 0;
                isSyncSample = 0;
                ALOGV("image offset %lld, size %zu", (long long)offset, size);
            }
    
            if (err != OK) {
                if (err == ERROR_END_OF_STREAM) {
                    return AMEDIA_ERROR_END_OF_STREAM;
                }
                return AMEDIA_ERROR_UNKNOWN;
            }
    
            err = mBufferGroup->acquire_buffer(&mBuffer);
    
            if (err != OK) {
                CHECK(mBuffer == NULL);
                return AMEDIA_ERROR_UNKNOWN;
            }
            if (size > mBuffer->size()) {
                ALOGE("buffer too small: %zu > %zu", size, mBuffer->size());
                mBuffer->release();
                mBuffer = NULL;
                return AMEDIA_ERROR_UNKNOWN; // ERROR_BUFFER_TOO_SMALL
            }
        }
    
        if (!mIsAVC && !mIsHEVC && !mIsAC4) {
            if (newBuffer) {
                if (mIsPcm) {
                    // The twos' PCM block reader assumes that all samples has the same size.
    
                    uint32_t samplesToRead = mSampleTable->getLastSampleIndexInChunk()
                                                          - mCurrentSampleIndex + 1;
                    if (samplesToRead > kMaxPcmFrameSize) {
                        samplesToRead = kMaxPcmFrameSize;
                    }
    
                    ALOGV("Reading %d PCM frames of size %zu at index %d to stop of chunk at %d",
                          samplesToRead, size, mCurrentSampleIndex,
                          mSampleTable->getLastSampleIndexInChunk());
    
                    size_t totalSize = samplesToRead * size;
                    uint8_t* buf = (uint8_t *)mBuffer->data();
                    ssize_t bytesRead = mDataSource->readAt(offset, buf, totalSize);
                    if (bytesRead < (ssize_t)totalSize) {
                        mBuffer->release();
                        mBuffer = NULL;
    
                        return AMEDIA_ERROR_IO;
                    }
    
                    AMediaFormat *meta = mBuffer->meta_data();
                    AMediaFormat_clear(meta);
                    AMediaFormat_setInt64(
                          meta, AMEDIAFORMAT_KEY_TIME_US, ((long double)cts * 1000000) / mTimescale);
                    AMediaFormat_setInt32(meta, AMEDIAFORMAT_KEY_IS_SYNC_FRAME, 1);
    
                    int32_t byteOrder;
                    AMediaFormat_getInt32(mFormat,
                            AMEDIAFORMAT_KEY_PCM_BIG_ENDIAN, &byteOrder);
    
                    if (byteOrder == 1) {
                        // Big-endian -> little-endian
                        uint16_t *dstData = (uint16_t *)buf;
                        uint16_t *srcData = (uint16_t *)buf;
    
                        for (size_t j = 0; j < bytesRead / sizeof(uint16_t); j++) {
                             dstData[j] = ntohs(srcData[j]);
                        }
                    }
    
                    mCurrentSampleIndex += samplesToRead;
                    mBuffer->set_range(0, totalSize);
                } else {
                    ssize_t num_bytes_read =
                        mDataSource->readAt(offset, (uint8_t *)mBuffer->data(), size);
    
                    if (num_bytes_read < (ssize_t)size) {
                        mBuffer->release();
                        mBuffer = NULL;
    
                        return AMEDIA_ERROR_IO;
                    }
    
                    CHECK(mBuffer != NULL);
                    mBuffer->set_range(0, size);
                    AMediaFormat *meta = mBuffer->meta_data();
                    AMediaFormat_clear(meta);
                    AMediaFormat_setInt64(
                            meta, AMEDIAFORMAT_KEY_TIME_US, ((long double)cts * 1000000) / mTimescale);
                    AMediaFormat_setInt64(
                            meta, AMEDIAFORMAT_KEY_DURATION, ((long double)stts * 1000000) / mTimescale);
    
                    if (targetSampleTimeUs >= 0) {
                        AMediaFormat_setInt64(
                                meta, AMEDIAFORMAT_KEY_TARGET_TIME, targetSampleTimeUs);
                    }
    
                    if (isSyncSample) {
                        AMediaFormat_setInt32(meta, AMEDIAFORMAT_KEY_IS_SYNC_FRAME, 1);
                    }
    
                    ++mCurrentSampleIndex;
                }
            }
    
            *out = mBuffer;
            mBuffer = NULL;
    
            return AMEDIA_OK;
    
        } else if (mIsAC4) {
            CHECK(mBuffer != NULL);
            // Make sure there is enough space to write the sync header and the raw frame
            if (mBuffer->range_length() < (7 + size)) {
                mBuffer->release();
                mBuffer = NULL;
    
                return AMEDIA_ERROR_IO;
            }
    
            uint8_t *dstData = (uint8_t *)mBuffer->data();
            size_t dstOffset = 0;
            // Add AC-4 sync header to MPEG4 encapsulated AC-4 raw frame
            // AC40 sync word, meaning no CRC at the end of the frame
            dstData[dstOffset++] = 0xAC;
            dstData[dstOffset++] = 0x40;
            dstData[dstOffset++] = 0xFF;
            dstData[dstOffset++] = 0xFF;
            dstData[dstOffset++] = (uint8_t)((size >> 16) & 0xFF);
            dstData[dstOffset++] = (uint8_t)((size >> 8) & 0xFF);
            dstData[dstOffset++] = (uint8_t)((size >> 0) & 0xFF);
    
            ssize_t numBytesRead = mDataSource->readAt(offset, dstData + dstOffset, size);
            if (numBytesRead != (ssize_t)size) {
                mBuffer->release();
                mBuffer = NULL;
    
                return AMEDIA_ERROR_IO;
            }
    
            mBuffer->set_range(0, dstOffset + size);
            AMediaFormat *meta = mBuffer->meta_data();
            AMediaFormat_clear(meta);
            AMediaFormat_setInt64(
                    meta, AMEDIAFORMAT_KEY_TIME_US, ((long double)cts * 1000000) / mTimescale);
            AMediaFormat_setInt64(
                    meta, AMEDIAFORMAT_KEY_DURATION, ((long double)stts * 1000000) / mTimescale);
    
            if (targetSampleTimeUs >= 0) {
                AMediaFormat_setInt64(
                        meta, AMEDIAFORMAT_KEY_TARGET_TIME, targetSampleTimeUs);
            }
    
            if (isSyncSample) {
                AMediaFormat_setInt32(meta, AMEDIAFORMAT_KEY_IS_SYNC_FRAME, 1);
            }
    
            ++mCurrentSampleIndex;
    
            *out = mBuffer;
            mBuffer = NULL;
    
            return AMEDIA_OK;
        } else {
            // Whole NAL units are returned but each fragment is prefixed by
            // the start code (0x00 00 00 01).
            ssize_t num_bytes_read = 0;
            num_bytes_read = mDataSource->readAt(offset, mSrcBuffer, size);
    
            if (num_bytes_read < (ssize_t)size) {
                mBuffer->release();
                mBuffer = NULL;
    
                return AMEDIA_ERROR_IO;
            }
    
            uint8_t *dstData = (uint8_t *)mBuffer->data();
            size_t srcOffset = 0;
            size_t dstOffset = 0;
    
            while (srcOffset < size) {
                bool isMalFormed = !isInRange((size_t)0u, size, srcOffset, mNALLengthSize);
                size_t nalLength = 0;
                if (!isMalFormed) {
                    nalLength = parseNALSize(&mSrcBuffer[srcOffset]);
                    srcOffset += mNALLengthSize;
                    isMalFormed = !isInRange((size_t)0u, size, srcOffset, nalLength);
                }
    
                if (isMalFormed) {
                    // if the NAL length is abnormal, ignore this NAL
                    ALOGW("abnormal nallength, ignore this NAL");
                    srcOffset = size;
                    break;
                }
    
                if (nalLength == 0) {
                    continue;
                }
    
                if (dstOffset > SIZE_MAX - 4 ||
                        dstOffset + 4 > SIZE_MAX - nalLength ||
                        dstOffset + 4 + nalLength > mBuffer->size()) {
                    ALOGE("b/27208621 : %zu %zu", dstOffset, mBuffer->size());
                    android_errorWriteLog(0x534e4554, "27208621");
                    mBuffer->release();
                    mBuffer = NULL;
                    return AMEDIA_ERROR_MALFORMED;
                }
    
                dstData[dstOffset++] = 0;
                dstData[dstOffset++] = 0;
                dstData[dstOffset++] = 0;
                dstData[dstOffset++] = 1;
                memcpy(&dstData[dstOffset], &mSrcBuffer[srcOffset], nalLength);
                srcOffset += nalLength;
                dstOffset += nalLength;
            }
            CHECK_EQ(srcOffset, size);
            CHECK(mBuffer != NULL);
            mBuffer->set_range(0, dstOffset);
    
            AMediaFormat *meta = mBuffer->meta_data();
            AMediaFormat_clear(meta);
            AMediaFormat_setInt64(
                    meta, AMEDIAFORMAT_KEY_TIME_US, ((long double)cts * 1000000) / mTimescale);
            AMediaFormat_setInt64(
                    meta, AMEDIAFORMAT_KEY_DURATION, ((long double)stts * 1000000) / mTimescale);
    
            if (targetSampleTimeUs >= 0) {
                AMediaFormat_setInt64(
                        meta, AMEDIAFORMAT_KEY_TARGET_TIME, targetSampleTimeUs);
            }
    
            if (mIsAVC) {
                uint32_t layerId = FindAVCLayerId(
                        (const uint8_t *)mBuffer->data(), mBuffer->range_length());
                AMediaFormat_setInt32(meta, AMEDIAFORMAT_KEY_TEMPORAL_LAYER_ID, layerId);
            } else if (mIsHEVC) {
                int32_t layerId = parseHEVCLayerId(
                        (const uint8_t *)mBuffer->data(), mBuffer->range_length());
                if (layerId >= 0) {
                    AMediaFormat_setInt32(meta, AMEDIAFORMAT_KEY_TEMPORAL_LAYER_ID, layerId);
                }
            }
    
            if (isSyncSample) {
                AMediaFormat_setInt32(meta, AMEDIAFORMAT_KEY_IS_SYNC_FRAME, 1);
            }
    
            ++mCurrentSampleIndex;
    
            *out = mBuffer;
            mBuffer = NULL;
    
            return AMEDIA_OK;
        }
    }
    

    This read() function is involved, but its essence is simple: the frame that was just read is stored in *out and handed back to the caller. We won't dig deeper here; interested readers can explore it on their own.
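
    One detail worth isolating is the AVC/HEVC branch above, which rewrites MP4-style length-prefixed NAL units into Annex-B form (each NAL prefixed with a 0x00000001 start code) before handing the frame to the decoder. A self-contained sketch of that conversion, assuming a 4-byte length prefix (i.e. mNALLengthSize == 4):

    #include <cstdint>
    #include <vector>

    // Convert a length-prefixed NAL stream (as stored in MP4 samples) into
    // Annex-B (start-code prefixed), mirroring the AVC/HEVC path in read().
    std::vector<uint8_t> toAnnexB(const uint8_t *src, size_t size) {
        std::vector<uint8_t> dst;
        size_t off = 0;
        while (off + 4 <= size) {
            // 4-byte big-endian NAL length prefix
            uint32_t nalLen = (uint32_t(src[off]) << 24) | (uint32_t(src[off + 1]) << 16)
                            | (uint32_t(src[off + 2]) << 8) | uint32_t(src[off + 3]);
            off += 4;
            if (nalLen == 0) continue;            // empty NAL: skip, as read() does
            if (nalLen > size - off) break;       // malformed: stop, like the ALOGW path
            static const uint8_t kStartCode[4] = {0, 0, 0, 1};
            dst.insert(dst.end(), kStartCode, kStartCode + 4);
            dst.insert(dst.end(), src + off, src + off + nalLen);
            off += nalLen;
        }
        return dst;
    }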

      Back in NuPlayer::GenericSource::readBuffer, the maximum number of buffers to read is set per track type: 64 for audio and 8 for video. The code once more:

    void NuPlayer::GenericSource::readBuffer(
            media_track_type trackType, int64_t seekTimeUs, MediaPlayerSeekMode mode,
            int64_t *actualTimeUs, bool formatChange) {
        ...
        Track *track;
        size_t maxBuffers = 1;
        switch (trackType) { // pick maxBuffers by track type and select the track
            case MEDIA_TRACK_TYPE_VIDEO: // video
                track = &mVideoTrack;
                maxBuffers = 8;  // too large of a number may influence seeks
                break;
            case MEDIA_TRACK_TYPE_AUDIO: // audio
                track = &mAudioTrack;
                maxBuffers = 64;
                if (mIsByteMode) {
                    maxBuffers = 1;
                }
                break;
            case MEDIA_TRACK_TYPE_SUBTITLE: // subtitles
                track = &mSubtitleTrack;
            ...
        }
    

      Leaving MPEG4Source::read and returning to NuPlayer::GenericSource::readBuffer: once the required buffers have been read, if a formatChange or seek occurred, queueDiscontinuityIfNeeded() is called to queue a discontinuity marker into the track's packet queue.
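
    Roughly, that helper queues the discontinuity into the track's AnotherPacketSource. An abridged version (from AOSP; the exact body varies by release):

    void NuPlayer::GenericSource::queueDiscontinuityIfNeeded(
            bool seeking, bool formatChange, media_track_type trackType, Track *track) {
        // formatChange && seeking: track whose source was changed during selection
        // formatChange && !seeking: track whose source was not changed during selection
        // !formatChange: normal seek
        if ((seeking || formatChange)
                && (trackType == MEDIA_TRACK_TYPE_AUDIO
                || trackType == MEDIA_TRACK_TYPE_VIDEO)) {
            ATSParser::DiscontinuityType type = (formatChange && seeking)
                    ? ATSParser::DISCONTINUITY_FORMATCHANGE
                    : ATSParser::DISCONTINUITY_NONE;
            track->mPackets->queueDiscontinuity(type, NULL /* extra */, true /* discard */);
        }
    }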

      NuPlayer::GenericSource::start() also posted a kWhatStart message; its handler drives NuPlayer::GenericSource::BufferingMonitor, which maintains GenericSource's overall buffering state.
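
    The handler for that message in GenericSource is small; in N-era AOSP it essentially just (re)starts the buffering monitor's polling (abridged sketch; the exact body is version-dependent):

    // NuPlayer::GenericSource::onMessageReceived (abridged sketch, version-dependent)
    case kWhatStart:
    case kWhatResume:
    {
        mBufferingMonitor->restartPollBuffering();
        break;
    }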

      Finally, back in NuPlayer::GenericSource::readBuffer, the data read by MPEG4Source is wrapped and queued via queueAccessUnit() into the buffer queue, where it waits to be decoded:

    // queue the buffer, where it waits to be decoded
    track->mPackets->queueAccessUnit(buffer);
    
