Introduction
An ultra-low-latency audio path can markedly improve the user's audio experience. In an in-ear-monitoring scenario, for example, ultra-low latency lets a performer hear their own voice in real time and correct it immediately, improving the quality of a live vocal. Clearly, a deep understanding and use of low latency is valuable for building audio competitiveness. This article introduces the Android AAudio mmap path.
Questions to Consider
Before reading further, it's worth posing a few questions:
- What is the essential difference between AAudio's exclusive and shared modes?
- What is AAudio's data synchronization mechanism, and how are producer and consumer coordinated? We know the ordinary AudioTrack/AudioRecord model is IRQ-based; what is AAudio's?
- How many data copies occur along the AAudio data path?
- Why does AAudio glitch so easily?
- What is the root cause of AAudio's glitches?
The purpose of this article is to peel these questions apart and answer them clearly.
Audio Overview
First, a picture of the Android system architecture:
[Figure: Android system architecture]
Anyone working on Android should internalize this diagram; a deep understanding of it is a great help when attacking hard Android problems.
Next, a few audio playback data-flow diagrams:
[Figures: audio playback data flow]
From these two diagrams the whole audio playback chain is fairly clear.
Audio paths, from lowest to highest latency, come in the following types:

Path | Buffer size | Latency | Power | Glitch proneness |
---|---|---|---|---|
aaudio | mmap-based | very low | high | likely |
ULL | 2 ms | very low | high | very likely |
fast | 4 ms | low | high | likely |
primary | 20 ms | medium | medium | less likely |
deep buffer | 40 ms | high | low | rare |
direct/offload | 80 ms | very high | very low | very rare |
Next, let's analyze how each of these paths works.
aaudio
AAudio's mmap mode maps the block of memory used by the kernel ALSA driver straight into user space, so the application delivers data directly to the driver and the intermediate copies are saved. Reads and writes are NOIRQ: no interrupt mechanism is used; a standalone timer drives the reading and writing instead. The flow is shown below:
[Figure: AAudio mmap data flow]
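Before reading the internals, here is a minimal client-side sketch of how an application asks for this path, using the public AAudio NDK API. Whether the MMAP path is actually granted depends on the device, the HAL, and system configuration, so the result must be checked after opening:

#include <aaudio/AAudio.h>
#include <cstdio>

// Minimal sketch: request the low-latency (potentially MMAP) output path.
aaudio_result_t open_low_latency_stream(AAudioStream **outStream) {
    AAudioStreamBuilder *builder = nullptr;
    aaudio_result_t result = AAudio_createStreamBuilder(&builder);
    if (result != AAUDIO_OK) return result;

    AAudioStreamBuilder_setDirection(builder, AAUDIO_DIRECTION_OUTPUT);
    AAudioStreamBuilder_setPerformanceMode(builder, AAUDIO_PERFORMANCE_MODE_LOW_LATENCY);
    // EXCLUSIVE asks for a private MMAP buffer; as the server code below
    // will show, the service may still downgrade the request to SHARED.
    AAudioStreamBuilder_setSharingMode(builder, AAUDIO_SHARING_MODE_EXCLUSIVE);

    result = AAudioStreamBuilder_openStream(builder, outStream);
    AAudioStreamBuilder_delete(builder);
    if (result != AAUDIO_OK) return result;

    // What we got may differ from what we asked for.
    printf("sharing=%d perf=%d\n",
           AAudioStream_getSharingMode(*outStream),
           AAudioStream_getPerformanceMode(*outStream));
    return AAUDIO_OK;
}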
Now let's look at AAudio's mmap implementation at the code level, starting from stream creation:
static aaudio_result_t builder_createStream(aaudio_direction_t direction,
aaudio_sharing_mode_t /*sharingMode*/,
bool tryMMap,
android::sp<AudioStream> &stream) {
aaudio_result_t result = AAUDIO_OK;
switch (direction) {
case AAUDIO_DIRECTION_INPUT:
if (tryMMap) {
stream = new AudioStreamInternalCapture(AAudioBinderClient::getInstance(),
false);
} else {
stream = new AudioStreamRecord();
}
break;
case AAUDIO_DIRECTION_OUTPUT:
if (tryMMap) {
stream = new AudioStreamInternalPlay(AAudioBinderClient::getInstance(),
false);
} else {
stream = new AudioStreamTrack();
}
break;
default:
ALOGE("%s() bad direction = %d", __func__, direction);
result = AAUDIO_ERROR_ILLEGAL_ARGUMENT;
}
return result;
}
The AudioStreamRecord and AudioStreamTrack here are the native capture and playback streams, the same ones OpenSL ES is built on, so you could say that non-mmap AAudio is effectively OpenSL.
Next, mmap-mode capture, starting with how the mmap capture stream is opened:
aaudio_result_t AudioStreamInternal::open(const AudioStreamBuilder &builder) {
aaudio_result_t result = AAUDIO_OK;
AAudioStreamRequest request;
AAudioStreamConfiguration configurationOutput;
if (getState() != AAUDIO_STREAM_STATE_UNINITIALIZED) {
ALOGE("%s - already open! state = %d", __func__, getState());
return AAUDIO_ERROR_INVALID_STATE;
}
// Copy requested parameters to the stream.
result = AudioStream::open(builder);
if (result < 0) {
return result;
}
const audio_format_t requestedFormat = getFormat();
// We have to do volume scaling. So we prefer FLOAT format.
if (requestedFormat == AUDIO_FORMAT_DEFAULT) {
setFormat(AUDIO_FORMAT_PCM_FLOAT);
}
// Request FLOAT for the shared mixer or the device.
request.getConfiguration().setFormat(AUDIO_FORMAT_PCM_FLOAT);
// TODO b/182392769: use attribution source util
AttributionSourceState attributionSource;
attributionSource.uid = VALUE_OR_FATAL(android::legacy2aidl_uid_t_int32_t(getuid()));
attributionSource.pid = VALUE_OR_FATAL(android::legacy2aidl_pid_t_int32_t(getpid()));
attributionSource.packageName = builder.getOpPackageName();
attributionSource.attributionTag = builder.getAttributionTag();
attributionSource.token = sp<android::BBinder>::make();
// Build the request to send to the server.
request.setAttributionSource(attributionSource);
request.setSharingModeMatchRequired(isSharingModeMatchRequired());
request.setInService(isInService());
request.getConfiguration().setDeviceId(getDeviceId());
request.getConfiguration().setSampleRate(getSampleRate());
request.getConfiguration().setDirection(getDirection());
request.getConfiguration().setSharingMode(getSharingMode());
request.getConfiguration().setChannelMask(getChannelMask());
request.getConfiguration().setUsage(getUsage());
request.getConfiguration().setContentType(getContentType());
request.getConfiguration().setSpatializationBehavior(getSpatializationBehavior());
request.getConfiguration().setIsContentSpatialized(isContentSpatialized());
request.getConfiguration().setInputPreset(getInputPreset());
request.getConfiguration().setPrivacySensitive(isPrivacySensitive());
request.getConfiguration().setBufferCapacity(builder.getBufferCapacity());
mDeviceChannelCount = getSamplesPerFrame(); // Assume it will be the same. Update if not.
mServiceStreamHandle = mServiceInterface.openStream(request, configurationOutput);
if (mServiceStreamHandle < 0
&& (request.getConfiguration().getSamplesPerFrame() == 1
|| request.getConfiguration().getChannelMask() == AAUDIO_CHANNEL_MONO)
&& getDirection() == AAUDIO_DIRECTION_OUTPUT
&& !isInService()) {
// if that failed then try switching from mono to stereo if OUTPUT.
// Only do this in the client. Otherwise we end up with a mono mixer in the service
// that writes to a stereo MMAP stream.
ALOGD("%s() - openStream() returned %d, try switching from MONO to STEREO",
__func__, mServiceStreamHandle);
request.getConfiguration().setChannelMask(AAUDIO_CHANNEL_STEREO);
mServiceStreamHandle = mServiceInterface.openStream(request, configurationOutput);
}
if (mServiceStreamHandle < 0) {
return mServiceStreamHandle;
}
// This must match the key generated in oboeservice/AAudioServiceStreamBase.cpp
// so the client can have permission to log.
if (!mInService) {
// No need to log if it is from service side.
mMetricsId = std::string(AMEDIAMETRICS_KEY_PREFIX_AUDIO_STREAM)
+ std::to_string(mServiceStreamHandle);
}
android::mediametrics::LogItem(mMetricsId)
.set(AMEDIAMETRICS_PROP_PERFORMANCEMODE,
AudioGlobal_convertPerformanceModeToText(builder.getPerformanceMode()))
.set(AMEDIAMETRICS_PROP_SHARINGMODE,
AudioGlobal_convertSharingModeToText(builder.getSharingMode()))
.set(AMEDIAMETRICS_PROP_ENCODINGCLIENT,
android::toString(requestedFormat).c_str()).record();
result = configurationOutput.validate();
if (result != AAUDIO_OK) {
goto error;
}
// Save results of the open.
if (getChannelMask() == AAUDIO_UNSPECIFIED) {
setChannelMask(configurationOutput.getChannelMask());
}
mDeviceChannelCount = configurationOutput.getSamplesPerFrame();
setSampleRate(configurationOutput.getSampleRate());
setDeviceId(configurationOutput.getDeviceId());
setSessionId(configurationOutput.getSessionId());
setSharingMode(configurationOutput.getSharingMode());
setUsage(configurationOutput.getUsage());
setContentType(configurationOutput.getContentType());
setSpatializationBehavior(configurationOutput.getSpatializationBehavior());
setIsContentSpatialized(configurationOutput.isContentSpatialized());
setInputPreset(configurationOutput.getInputPreset());
// Save device format so we can do format conversion and volume scaling together.
setDeviceFormat(configurationOutput.getFormat());
result = mServiceInterface.getStreamDescription(mServiceStreamHandle, mEndPointParcelable);
if (result != AAUDIO_OK) {
goto error;
}
// Resolve parcelable into a descriptor.
result = mEndPointParcelable.resolve(&mEndpointDescriptor);
if (result != AAUDIO_OK) {
goto error;
}
// Configure endpoint based on descriptor.
mAudioEndpoint = std::make_unique<AudioEndpoint>();
result = mAudioEndpoint->configure(&mEndpointDescriptor, getDirection());
if (result != AAUDIO_OK) {
goto error;
}
if ((result = configureDataInformation(builder.getFramesPerDataCallback())) != AAUDIO_OK) {
goto error;
}
setState(AAUDIO_STREAM_STATE_OPEN);
return result;
error:
safeReleaseClose();
return result;
}
Here we can see every parameter set by the application being packed into one binder request and sent to aaudioservice for processing. Note that the parameters aaudioservice actually uses to create the stream are not necessarily the requested ones; device configurations are complicated and the app's settings may not be optimal, so aaudioservice will pick more suitable parameters when needed.
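As a practical aside, a client can read back the parameters the service actually granted once the stream opens. A minimal sketch using public NDK getters:

#include <aaudio/AAudio.h>
#include <cstdio>

// Print the negotiated parameters; they may differ from the request.
void log_granted_params(AAudioStream *stream) {
    printf("granted: rate=%d channels=%d format=%d burst=%d frames\n",
           AAudioStream_getSampleRate(stream),
           AAudioStream_getChannelCount(stream),
           AAudioStream_getFormat(stream),
           AAudioStream_getFramesPerBurst(stream));
}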
Next, how aaudioservice opens the mmap stream. At this point the call is still in the client process, which sends a binder request to aaudioservice:
/**
* @param request info needed to create the stream
* @param configuration contains information about the created stream
* @return handle to the stream or a negative error
*/
aaudio_handle_t AAudioBinderClient::openStream(const AAudioStreamRequest &request,
AAudioStreamConfiguration &configuration) {
aaudio_handle_t stream;
for (int i = 0; i < 2; i++) {
std::shared_ptr<AAudioServiceInterface> service = getAAudioService();
if (service.get() == nullptr) return AAUDIO_ERROR_NO_SERVICE;
stream = service->openStream(request, configuration);
if (stream == AAUDIO_ERROR_NO_SERVICE) {
ALOGE("openStream lost connection to AAudioService.");
dropAAudioService(); // force a reconnect
} else {
break;
}
}
return stream;
}
The stream-creation flow inside aaudioservice is below; by this point the IPC has landed in the service process:
Status
AAudioService::openStream(const StreamRequest &_request, StreamParameters* _paramsOut,
int32_t *_aidl_return) {
static_assert(std::is_same_v<aaudio_result_t, std::decay_t<typeof(*_aidl_return)>>);
// Create wrapper objects for simple usage of the parcelables.
const AAudioStreamRequest request(_request);
AAudioStreamConfiguration paramsOut;
// A lock in is used to order the opening of endpoints when an
// EXCLUSIVE endpoint is stolen. We want the order to be:
// 1) Thread A opens exclusive MMAP endpoint
// 2) Thread B wants to open an exclusive MMAP endpoint so it steals the one from A
// under this lock.
// 3) Thread B opens a shared MMAP endpoint.
// 4) Thread A can then get the lock and also open a shared stream.
// Without the lock. Thread A might sneak in and reallocate an exclusive stream
// before B can open the shared stream.
std::unique_lock<std::recursive_mutex> lock(mOpenLock);
aaudio_result_t result = AAUDIO_OK;
sp<AAudioServiceStreamBase> serviceStream;
const AAudioStreamConfiguration &configurationInput = request.getConstantConfiguration();
bool sharingModeMatchRequired = request.isSharingModeMatchRequired();
aaudio_sharing_mode_t sharingMode = configurationInput.getSharingMode();
// Enforce limit on client processes.
AttributionSourceState attributionSource = request.getAttributionSource();
pid_t pid = IPCThreadState::self()->getCallingPid();
attributionSource.pid = VALUE_OR_RETURN_ILLEGAL_ARG_STATUS(
legacy2aidl_pid_t_int32_t(pid));
attributionSource.uid = VALUE_OR_RETURN_ILLEGAL_ARG_STATUS(
legacy2aidl_uid_t_int32_t(IPCThreadState::self()->getCallingUid()));
attributionSource.token = sp<BBinder>::make();
if (attributionSource.pid != mAudioClient.attributionSource.pid) {
int32_t count = AAudioClientTracker::getInstance().getStreamCount(pid);
if (count >= MAX_STREAMS_PER_PROCESS) {
ALOGE("openStream(): exceeded max streams per process %d >= %d",
count, MAX_STREAMS_PER_PROCESS);
AIDL_RETURN(AAUDIO_ERROR_UNAVAILABLE);
}
}
if (sharingMode != AAUDIO_SHARING_MODE_EXCLUSIVE && sharingMode != AAUDIO_SHARING_MODE_SHARED) {
ALOGE("openStream(): unrecognized sharing mode = %d", sharingMode);
AIDL_RETURN(AAUDIO_ERROR_ILLEGAL_ARGUMENT);
}
if (sharingMode == AAUDIO_SHARING_MODE_EXCLUSIVE
&& AAudioClientTracker::getInstance().isExclusiveEnabled(pid)) {
// only trust audioserver for in service indication
bool inService = false;
if (isCallerInService()) {
inService = request.isInService();
}
serviceStream = new AAudioServiceStreamMMAP(*this, inService);
result = serviceStream->open(request);
if (result != AAUDIO_OK) {
// Clear it so we can possibly fall back to using a shared stream.
ALOGW("openStream(), could not open in EXCLUSIVE mode");
serviceStream.clear();
}
}
// Try SHARED if SHARED requested or if EXCLUSIVE failed.
if (sharingMode == AAUDIO_SHARING_MODE_SHARED) {
serviceStream = new AAudioServiceStreamShared(*this);
result = serviceStream->open(request);
} else if (serviceStream.get() == nullptr && !sharingModeMatchRequired) {
aaudio::AAudioStreamRequest modifiedRequest = request;
// Overwrite the original EXCLUSIVE mode with SHARED.
modifiedRequest.getConfiguration().setSharingMode(AAUDIO_SHARING_MODE_SHARED);
serviceStream = new AAudioServiceStreamShared(*this);
result = serviceStream->open(modifiedRequest);
}
if (result != AAUDIO_OK) {
serviceStream.clear();
AIDL_RETURN(result);
} else {
aaudio_handle_t handle = mStreamTracker.addStreamForHandle(serviceStream.get());
serviceStream->setHandle(handle);
AAudioClientTracker::getInstance().registerClientStream(pid, serviceStream);
paramsOut.copyFrom(*serviceStream);
*_paramsOut = std::move(paramsOut).parcelable();
// Log open in MediaMetrics after we have the handle because we need the handle to
// create the metrics ID.
serviceStream->logOpen(handle);
ALOGV("%s(): return handle = 0x%08X", __func__, handle);
AIDL_RETURN(handle);
}
}
From this we can glean the following:
- The number of AAudio mmap streams one process can create is capped: at most 8 per process (MAX_STREAMS_PER_PROCESS); beyond that, opening fails (see the probe sketched after this list).
- If opening in exclusive mode fails, shared mode is attempted as well.
- Once the AAudio stream is created, the service also holds a proxy stream object corresponding to the client's AudioStream, so the opened stream can be driven over binder.
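To make the first point concrete, here is a hedged sketch that probes the per-process cap by opening low-latency streams until the service refuses. Streams that silently fall back to the legacy path may not count against the MMAP limit, so the observed number can vary by device:

#include <aaudio/AAudio.h>
#include <vector>

// Open low-latency streams until failure and report how many succeeded.
int count_openable_streams() {
    std::vector<AAudioStream *> streams;
    for (;;) {
        AAudioStreamBuilder *builder = nullptr;
        if (AAudio_createStreamBuilder(&builder) != AAUDIO_OK) break;
        AAudioStreamBuilder_setPerformanceMode(builder,
                                               AAUDIO_PERFORMANCE_MODE_LOW_LATENCY);
        AAudioStream *stream = nullptr;
        aaudio_result_t r = AAudioStreamBuilder_openStream(builder, &stream);
        AAudioStreamBuilder_delete(builder);
        if (r != AAUDIO_OK) break; // e.g. once MAX_STREAMS_PER_PROCESS is hit
        streams.push_back(stream);
    }
    const int n = (int) streams.size();
    for (AAudioStream *s : streams) AAudioStream_close(s);
    return n;
}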
Next, let's look at exclusive-mode creation, carrying one question with us: where exactly do shared and exclusive modes differ?
// Open stream on HAL and pass information about the shared memory buffer back to the client.
aaudio_result_t AAudioServiceStreamMMAP::open(const aaudio::AAudioStreamRequest &request) {
sp<AAudioServiceStreamMMAP> keep(this);
if (request.getConstantConfiguration().getSharingMode() != AAUDIO_SHARING_MODE_EXCLUSIVE) {
ALOGE("%s() sharingMode mismatch %d", __func__,
request.getConstantConfiguration().getSharingMode());
return AAUDIO_ERROR_INTERNAL;
}
aaudio_result_t result = AAudioServiceStreamBase::open(request);
if (result != AAUDIO_OK) {
return result;
}
sp<AAudioServiceEndpoint> endpoint = mServiceEndpointWeak.promote();
if (endpoint == nullptr) {
ALOGE("%s() has no endpoint", __func__);
return AAUDIO_ERROR_INVALID_STATE;
}
result = endpoint->registerStream(keep);
if (result != AAUDIO_OK) {
return result;
}
setState(AAUDIO_STREAM_STATE_OPEN);
return AAUDIO_OK;
}
Two pieces of information here:
- The operation that opens the mmap channel lives in AAudioServiceStreamBase.
- AAudioServiceEndpoint is responsible for talking to the underlying device, so it also has to hold the successfully created AAudioServiceStreamMMAP object.
Next, how AAudioServiceStreamBase opens mmap:
aaudio_result_t AAudioServiceStreamBase::open(const aaudio::AAudioStreamRequest &request) {
AAudioEndpointManager &mEndpointManager = AAudioEndpointManager::getInstance();
aaudio_result_t result = AAUDIO_OK;
mMmapClient.attributionSource = request.getAttributionSource();
// TODO b/182392769: use attribution source util
mMmapClient.attributionSource.uid = VALUE_OR_FATAL(
legacy2aidl_uid_t_int32_t(IPCThreadState::self()->getCallingUid()));
mMmapClient.attributionSource.pid = VALUE_OR_FATAL(
legacy2aidl_pid_t_int32_t(IPCThreadState::self()->getCallingPid()));
// Limit scope of lock to avoid recursive lock in close().
{
std::lock_guard<std::mutex> lock(mUpMessageQueueLock);
if (mUpMessageQueue != nullptr) {
ALOGE("%s() called twice", __func__);
return AAUDIO_ERROR_INVALID_STATE;
}
mUpMessageQueue = std::make_shared<SharedRingBuffer>();
result = mUpMessageQueue->allocate(sizeof(AAudioServiceMessage),
QUEUE_UP_CAPACITY_COMMANDS);
if (result != AAUDIO_OK) {
goto error;
}
// This is not protected by a lock because the stream cannot be
// referenced until the service returns a handle to the client.
// So only one thread can open a stream.
mServiceEndpoint = mEndpointManager.openEndpoint(mAudioService,
request);
if (mServiceEndpoint == nullptr) {
result = AAUDIO_ERROR_UNAVAILABLE;
goto error;
}
// Save a weak pointer that we will use to access the endpoint.
mServiceEndpointWeak = mServiceEndpoint;
mFramesPerBurst = mServiceEndpoint->getFramesPerBurst();
copyFrom(*mServiceEndpoint);
}
// Make sure this object does not get deleted before the run() method
// can protect it by making a strong pointer.
mCommandQueue.startWaiting();
mThreadEnabled = true;
incStrong(nullptr); // See run() method.
result = mCommandThread.start(this);
if (result != AAUDIO_OK) {
decStrong(nullptr); // run() can't do it so we have to do it here.
goto error;
}
return result;
error:
closeAndClear();
mThreadEnabled = false;
mCommandQueue.stopWaiting();
mCommandThread.stop();
return result;
}
We still have not reached the point where mmap is opened, but several things happen here:
- An AAudio message shared-memory queue is created, used to synchronize stream state: the write position, whether an underrun or overrun occurred, and so on.
- mCommandQueue is started; this is AAudio's command queue, although not a cross-process one.
- The AAudio command thread mCommandThread is started.
This may look fuzzy at first, so let's go through them one by one; none of them is complicated.
First, SharedRingBuffer, which holds AAudioServiceMessage entries. This is shared memory: built on Android's ashmem, it works across processes.
aaudio_result_t SharedRingBuffer::allocate(fifo_frames_t bytesPerFrame,
fifo_frames_t capacityInFrames) {
mCapacityInFrames = capacityInFrames;
// Create shared memory large enough to hold the data and the read and write counters.
mDataMemorySizeInBytes = bytesPerFrame * capacityInFrames;
mSharedMemorySizeInBytes = mDataMemorySizeInBytes + (2 * (sizeof(fifo_counter_t)));
mFileDescriptor.reset(ashmem_create_region("AAudioSharedRingBuffer", mSharedMemorySizeInBytes));
if (mFileDescriptor.get() == -1) {
ALOGE("allocate() ashmem_create_region() failed %d", errno);
return AAUDIO_ERROR_INTERNAL;
}
ALOGV("allocate() mFileDescriptor = %d\n", mFileDescriptor.get());
int err = ashmem_set_prot_region(mFileDescriptor.get(), PROT_READ|PROT_WRITE); // TODO error handling?
if (err < 0) {
ALOGE("allocate() ashmem_set_prot_region() failed %d", errno);
mFileDescriptor.reset();
return AAUDIO_ERROR_INTERNAL; // TODO convert errno to a better AAUDIO_ERROR;
}
// Map the fd to memory addresses. Use a temporary pointer to keep the mmap result and update
// it to `mSharedMemory` only when mmap operate successfully.
uint8_t* tmpPtr = (uint8_t *) mmap(0, mSharedMemorySizeInBytes,
PROT_READ|PROT_WRITE,
MAP_SHARED,
mFileDescriptor.get(), 0);
if (tmpPtr == MAP_FAILED) {
ALOGE("allocate() mmap() failed %d", errno);
mFileDescriptor.reset();
return AAUDIO_ERROR_INTERNAL; // TODO convert errno to a better AAUDIO_ERROR;
}
mSharedMemory = tmpPtr;
// Get addresses for our counters and data from the shared memory.
fifo_counter_t *readCounterAddress =
(fifo_counter_t *) &mSharedMemory[SHARED_RINGBUFFER_READ_OFFSET];
fifo_counter_t *writeCounterAddress =
(fifo_counter_t *) &mSharedMemory[SHARED_RINGBUFFER_WRITE_OFFSET];
uint8_t *dataAddress = &mSharedMemory[SHARED_RINGBUFFER_DATA_OFFSET];
mFifoBuffer = std::make_shared<FifoBufferIndirect>(bytesPerFrame, capacityInFrames,
readCounterAddress, writeCounterAddress, dataAddress);
return AAUDIO_OK;
}
This is Android's anonymous shared memory. As mentioned before, ashmem is essentially cross-process shared memory built on a shared virtual-memory file, and since binder can pass file descriptors, any two processes can share memory this way.
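A minimal sketch of that pattern, using the same libcutils ashmem helpers as the code above (platform-internal headers, not part of the NDK):

#include <cutils/ashmem.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdint>

// Create an ashmem region and map it; the returned fd is what would be
// sent to the peer process over binder, which then mmap()s the same pages.
uint8_t *create_shared_region(size_t bytes, int *outFd) {
    int fd = ashmem_create_region("demo_region", bytes);
    if (fd < 0) return nullptr;
    if (ashmem_set_prot_region(fd, PROT_READ | PROT_WRITE) < 0) {
        close(fd);
        return nullptr;
    }
    void *p = mmap(nullptr, bytes, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) {
        close(fd);
        return nullptr;
    }
    *outFd = fd;
    return static_cast<uint8_t *>(p);
}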
Next, the AAudioServiceMessage structures carried in this shared memory:
// Used to send information about the HAL to the client.
struct AAudioMessageTimestamp {
int64_t position; // number of frames transferred so far
int64_t timestamp; // time when that position was reached
};
typedef enum aaudio_service_event_e : uint32_t {
AAUDIO_SERVICE_EVENT_STARTED,
AAUDIO_SERVICE_EVENT_PAUSED,
AAUDIO_SERVICE_EVENT_STOPPED,
AAUDIO_SERVICE_EVENT_FLUSHED,
AAUDIO_SERVICE_EVENT_DISCONNECTED,
AAUDIO_SERVICE_EVENT_VOLUME,
AAUDIO_SERVICE_EVENT_XRUN
} aaudio_service_event_t;
struct AAudioMessageEvent {
aaudio_service_event_t event;
union {
// Align so that 32 and 64-bit code can exchange messages through shared memory.
alignas(8)
double dataDouble;
int64_t dataLong;
};
};
typedef struct AAudioServiceMessage_s {
enum class code : uint32_t {
NOTHING,
TIMESTAMP_SERVICE, // when frame is read or written by the service to the client
TIMESTAMP_HARDWARE, // when frame is at DAC or ADC
EVENT,
};
code what;
union {
// Align so that 32 and 64-bit code can exchange messages through shared memory.
alignas(8)
AAudioMessageTimestamp timestamp; // what == TIMESTAMP
AAudioMessageEvent event; // what == EVENT
};
} AAudioServiceMessage;
This, then, is how the state of the mmap channel is synchronized across processes. Below is a sketch of the counter-based FIFO idea that the FifoBufferIndirect above builds on.
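A single-producer/single-consumer ring reduced to its essentials. This is an illustration of the idea, not the AOSP implementation; the real FifoBuffer places its read and write counters at the offsets computed in allocate() above:

#include <atomic>
#include <cstdint>
#include <cstring>

// Shared layout: two monotonically increasing counters plus the ring data.
struct SpscShared {
    std::atomic<int64_t> readCounter;  // frames consumed so far
    std::atomic<int64_t> writeCounter; // frames produced so far
    uint8_t data[1];                   // ring data follows in shared memory
};

// Writer side: copy one frame if there is room, then publish the new count.
bool write_frame(SpscShared *q, int64_t capacityFrames, size_t frameBytes,
                 const void *frame) {
    const int64_t wr = q->writeCounter.load(std::memory_order_relaxed);
    const int64_t rd = q->readCounter.load(std::memory_order_acquire);
    if (wr - rd >= capacityFrames) return false; // full: reader not caught up
    memcpy(&q->data[(wr % capacityFrames) * frameBytes], frame, frameBytes);
    q->writeCounter.store(wr + 1, std::memory_order_release);
    return true;
}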
Next, mCommandQueue. This is an AAudioCommandQueue, essentially a command FIFO plus a producer/consumer mechanism.
mCommandThread is a simplified version of the framework's Looper: a dedicated thread that executes submitted work.
This is only a quick introduction to these members; we will come back to them, because the timing synchronization of the mmap mechanism rests on exactly these pieces. A miniature version of the command-thread pattern follows.
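To make the Looper analogy concrete, here is a miniature command thread built only on the standard library; it sketches the pattern, not the actual AAudioCommandQueue code:

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// One worker thread drains a queue of commands posted by other threads,
// which is roughly what mCommandThread does for the service stream.
class CommandThread {
public:
    CommandThread() : mWorker([this] { run(); }) {}
    ~CommandThread() {
        {
            std::lock_guard<std::mutex> l(mLock);
            mStop = true;
        }
        mCond.notify_one();
        mWorker.join();
    }
    void post(std::function<void()> cmd) {
        {
            std::lock_guard<std::mutex> l(mLock);
            mQueue.push(std::move(cmd));
        }
        mCond.notify_one();
    }
private:
    void run() {
        std::unique_lock<std::mutex> l(mLock);
        while (true) {
            mCond.wait(l, [this] { return mStop || !mQueue.empty(); });
            if (mStop && mQueue.empty()) return;
            std::function<void()> cmd = std::move(mQueue.front());
            mQueue.pop();
            l.unlock();
            cmd(); // run the command outside the lock
            l.lock();
        }
    }
    std::mutex mLock;
    std::condition_variable mCond;
    std::queue<std::function<void()>> mQueue;
    bool mStop = false;
    std::thread mWorker; // declared last so the members above are ready first
};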
Now let's continue with the creation of the mmap channel:
sp<AAudioServiceEndpoint> AAudioEndpointManager::openEndpoint(AAudioService &audioService,
const aaudio::AAudioStreamRequest &request) {
if (request.getConstantConfiguration().getSharingMode() == AAUDIO_SHARING_MODE_EXCLUSIVE) {
sp<AAudioServiceEndpoint> endpointToSteal;
sp<AAudioServiceEndpoint> foundEndpoint =
openExclusiveEndpoint(audioService, request, endpointToSteal);
if (endpointToSteal.get()) {
endpointToSteal->releaseRegisteredStreams(); // free the MMAP resource
}
return foundEndpoint;
} else {
return openSharedEndpoint(audioService, request);
}
}
For exclusive mode we go through openExclusiveEndpoint. Note the endpointToSteal parameter: an exclusive endpoint can be preempted, so the logic below is worth a careful read to understand how an established mmap connection can be stolen.
sp<AAudioServiceEndpoint> AAudioEndpointManager::openExclusiveEndpoint(
AAudioService &aaudioService,
const aaudio::AAudioStreamRequest &request,
sp<AAudioServiceEndpoint> &endpointToSteal) {
std::lock_guard<std::mutex> lock(mExclusiveLock);
const AAudioStreamConfiguration &configuration = request.getConstantConfiguration();
// Try to find an existing endpoint.
sp<AAudioServiceEndpoint> endpoint = findExclusiveEndpoint_l(configuration);
// If we find an existing one then this one cannot be exclusive.
if (endpoint.get() != nullptr) {
if (kStealingEnabled
&& !endpoint->isForSharing() // not currently SHARED
&& !request.isSharingModeMatchRequired()) { // app did not request a shared stream
ALOGD("%s() endpoint in EXCLUSIVE use. Steal it!", __func__);
mExclusiveStolenCount++;
// Prevent this process from getting another EXCLUSIVE stream.
// This will prevent two clients from colliding after a DISCONNECTION
// when they both try to open an exclusive stream at the same time.
// That can result in a stream getting disconnected between the OPEN
// and START calls. This will help preserve app compatibility.
// An app can avoid having this happen by closing their streams when
// the app is paused.
pid_t pid = VALUE_OR_FATAL(
aidl2legacy_int32_t_pid_t(request.getAttributionSource().pid));
AAudioClientTracker::getInstance().setExclusiveEnabled(pid, false);
endpointToSteal = endpoint; // return it to caller
}
return nullptr;
} else {
sp<AAudioServiceEndpointMMAP> endpointMMap = new AAudioServiceEndpointMMAP(aaudioService);
ALOGV("%s(), no match so try to open MMAP %p for dev %d",
__func__, endpointMMap.get(), configuration.getDeviceId());
endpoint = endpointMMap;
aaudio_result_t result = endpoint->open(request);
if (result != AAUDIO_OK) {
endpoint.clear();
} else {
mExclusiveStreams.push_back(endpointMMap);
mExclusiveOpenCount++;
}
}
if (endpoint.get() != nullptr) {
// Increment the reference count under this lock.
endpoint->setOpenCount(endpoint->getOpenCount() + 1);
endpoint->setForSharing(request.isSharingModeMatchRequired());
}
return endpoint;
}
We can read off the following:
- First, the existing exclusive endpoints are searched for a match.
- If a matching endpoint is found and the new request does not insist on exclusivity, that endpoint is stolen: the current request effectively changes from exclusive to shared, and the matched endpoint is handed back through endpointToSteal, on which, as the code above showed, releaseRegisteredStreams() is then called.
- If nothing matches, a new exclusive endpoint, an AAudioServiceEndpointMMAP, is created.
Before drawing further conclusions, look at the matching conditions used in the search:
bool AAudioServiceEndpoint::matches(const AAudioStreamConfiguration& configuration) {
if (!mConnected.load()) {
return false; // Only use an endpoint if it is connected to a device.
}
if (configuration.getDirection() != getDirection()) {
return false;
}
if (configuration.getDeviceId() != AAUDIO_UNSPECIFIED &&
configuration.getDeviceId() != getDeviceId()) {
return false;
}
if (configuration.getSessionId() != AAUDIO_SESSION_ID_ALLOCATE &&
configuration.getSessionId() != getSessionId()) {
return false;
}
if (configuration.getSampleRate() != AAUDIO_UNSPECIFIED &&
configuration.getSampleRate() != getSampleRate()) {
return false;
}
if (configuration.getSamplesPerFrame() != AAUDIO_UNSPECIFIED &&
configuration.getSamplesPerFrame() != getSamplesPerFrame()) {
return false;
}
if (configuration.getChannelMask() != AAUDIO_UNSPECIFIED &&
configuration.getChannelMask() != getChannelMask()) {
return false;
}
return true;
}
So it appears matching does not distinguish between processes: any stream with the same sample rate and channel count is likely to match. AAudio generally has no need for a sessionId; sessionIds exist to attach audio effects, which hurt latency.
The conclusion is that exclusive mode is rather fragile: as soon as any process opens a stream with the same sample rate and channel count, yours gets disconnected, and a request for exclusive mode ends up as a shared-mode connection. Given that, a client should verify what it was actually granted, as sketched below.
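A minimal check with public NDK getters; on most releases there is no public getter that reports "MMAP" directly, so performance mode plus sharing mode is the usual proxy:

#include <aaudio/AAudio.h>

// True if the stream kept both properties the exclusive MMAP path implies.
bool got_exclusive_low_latency(AAudioStream *stream) {
    return AAudioStream_getPerformanceMode(stream) ==
                   AAUDIO_PERFORMANCE_MODE_LOW_LATENCY &&
           AAudioStream_getSharingMode(stream) == AAUDIO_SHARING_MODE_EXCLUSIVE;
}

As the AOSP comment in openExclusiveEndpoint suggests, closing streams while the app is paused also reduces the chance of an exclusive endpoint being stolen.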
If no exclusive connection currently exists, an exclusive endpoint, AAudioServiceEndpointMMAP, is created next. Here is how it opens:
aaudio_result_t AAudioServiceEndpointMMAP::open(const aaudio::AAudioStreamRequest &request) {
aaudio_result_t result = AAUDIO_OK;
copyFrom(request.getConstantConfiguration());
mRequestedDeviceId = getDeviceId();
mMmapClient.attributionSource = request.getAttributionSource();
// TODO b/182392769: use attribution source util
mMmapClient.attributionSource.uid = VALUE_OR_FATAL(
legacy2aidl_uid_t_int32_t(IPCThreadState::self()->getCallingUid()));
mMmapClient.attributionSource.pid = VALUE_OR_FATAL(
legacy2aidl_pid_t_int32_t(IPCThreadState::self()->getCallingPid()));
audio_format_t audioFormat = getFormat();
result = openWithFormat(audioFormat);
if (result == AAUDIO_OK) return result;
if (result == AAUDIO_ERROR_UNAVAILABLE && audioFormat == AUDIO_FORMAT_PCM_FLOAT) {
ALOGD("%s() FLOAT failed, perhaps due to format. Try again with 32_BIT", __func__);
audioFormat = AUDIO_FORMAT_PCM_32_BIT;
result = openWithFormat(audioFormat);
}
if (result == AAUDIO_OK) return result;
if (result == AAUDIO_ERROR_UNAVAILABLE && audioFormat == AUDIO_FORMAT_PCM_32_BIT) {
ALOGD("%s() 32_BIT failed, perhaps due to format. Try again with 24_BIT_PACKED", __func__);
audioFormat = AUDIO_FORMAT_PCM_24_BIT_PACKED;
result = openWithFormat(audioFormat);
}
if (result == AAUDIO_OK) return result;
// TODO The HAL and AudioFlinger should be recommending a format if the open fails.
// But that recommendation is not propagating back from the HAL.
// So for now just try something very likely to work.
if (result == AAUDIO_ERROR_UNAVAILABLE && audioFormat == AUDIO_FORMAT_PCM_24_BIT_PACKED) {
ALOGD("%s() 24_BIT failed, perhaps due to format. Try again with 16_BIT", __func__);
audioFormat = AUDIO_FORMAT_PCM_16_BIT;
result = openWithFormat(audioFormat);
}
return result;
}
From this we can see that 16-bit samples have the best compatibility: the open falls back from FLOAT to 32-bit to 24-bit packed and finally to 16-bit. Now let's look at openWithFormat:
aaudio_result_t AAudioServiceEndpointMMAP::openWithFormat(audio_format_t audioFormat) {
aaudio_result_t result = AAUDIO_OK;
audio_config_base_t config;
audio_port_handle_t deviceId;
const audio_attributes_t attributes = getAudioAttributesFrom(this);
deviceId = mRequestedDeviceId;
// Fill in config
config.format = audioFormat;
int32_t aaudioSampleRate = getSampleRate();
if (aaudioSampleRate == AAUDIO_UNSPECIFIED) {
aaudioSampleRate = AAUDIO_SAMPLE_RATE_DEFAULT;
}
config.sample_rate = aaudioSampleRate;
const aaudio_direction_t direction = getDirection();
config.channel_mask = AAudio_getChannelMaskForOpen(
getChannelMask(), getSamplesPerFrame(), direction == AAUDIO_DIRECTION_INPUT);
if (direction == AAUDIO_DIRECTION_OUTPUT) {
mHardwareTimeOffsetNanos = OUTPUT_ESTIMATED_HARDWARE_OFFSET_NANOS; // frames at DAC later
} else if (direction == AAUDIO_DIRECTION_INPUT) {
mHardwareTimeOffsetNanos = INPUT_ESTIMATED_HARDWARE_OFFSET_NANOS; // frames at ADC earlier
} else {
ALOGE("%s() invalid direction = %d", __func__, direction);
return AAUDIO_ERROR_ILLEGAL_ARGUMENT;
}
MmapStreamInterface::stream_direction_t streamDirection =
(direction == AAUDIO_DIRECTION_OUTPUT)
? MmapStreamInterface::DIRECTION_OUTPUT
: MmapStreamInterface::DIRECTION_INPUT;
aaudio_session_id_t requestedSessionId = getSessionId();
audio_session_t sessionId = AAudioConvert_aaudioToAndroidSessionId(requestedSessionId);
// Open HAL stream. Set mMmapStream
ALOGD("%s trying to open MMAP stream with format=%#x, "
"sample_rate=%u, channel_mask=%#x, device=%d",
__func__, config.format, config.sample_rate,
config.channel_mask, deviceId);
status_t status = MmapStreamInterface::openMmapStream(streamDirection,
&attributes,
&config,
mMmapClient,
&deviceId,
&sessionId,
this, // callback
mMmapStream,
&mPortHandle);
ALOGD("%s() mMapClient.attributionSource = %s => portHandle = %d\n",
__func__, mMmapClient.attributionSource.toString().c_str(), mPortHandle);
if (status != OK) {
// This can happen if the resource is busy or the config does
// not match the hardware.
ALOGD("%s() - openMmapStream() returned status %d", __func__, status);
return AAUDIO_ERROR_UNAVAILABLE;
}
if (deviceId == AAUDIO_UNSPECIFIED) {
ALOGW("%s() - openMmapStream() failed to set deviceId", __func__);
}
setDeviceId(deviceId);
if (sessionId == AUDIO_SESSION_ALLOCATE) {
ALOGW("%s() - openMmapStream() failed to set sessionId", __func__);
}
aaudio_session_id_t actualSessionId =
(requestedSessionId == AAUDIO_SESSION_ID_NONE)
? AAUDIO_SESSION_ID_NONE
: (aaudio_session_id_t) sessionId;
setSessionId(actualSessionId);
ALOGD("%s(format = 0x%X) deviceId = %d, sessionId = %d",
__func__, audioFormat, getDeviceId(), getSessionId());
// Create MMAP/NOIRQ buffer.
result = createMmapBuffer(&mAudioDataFileDescriptor);
if (result != AAUDIO_OK) {
goto error;
}
// Get information about the stream and pass it back to the caller.
setChannelMask(AAudioConvert_androidToAAudioChannelMask(
config.channel_mask, getDirection() == AAUDIO_DIRECTION_INPUT,
AAudio_isChannelIndexMask(config.channel_mask)));
setFormat(config.format);
setSampleRate(config.sample_rate);
// If the position is not updated while the timestamp is updated for more than a certain amount,
// the timestamp reported from the HAL may not be accurate. Here, a timestamp grace period is
// set as 5 burst size. We may want to update this value if there is any report from OEMs saying
// that is too short.
static constexpr int kTimestampGraceBurstCount = 5;
mTimestampGracePeriodMs = ((int64_t) kTimestampGraceBurstCount * mFramesPerBurst
* AAUDIO_MILLIS_PER_SECOND) / getSampleRate();
ALOGD("%s() got rate = %d, channels = %d channelMask = %#x, deviceId = %d, capacity = %d\n",
__func__, getSampleRate(), getSamplesPerFrame(), getChannelMask(),
deviceId, getBufferCapacity());
ALOGD("%s() got format = 0x%X = %s, frame size = %d, burst size = %d",
__func__, getFormat(), audio_format_to_string(getFormat()),
calculateBytesPerFrame(), mFramesPerBurst);
return result;
error:
close();
// restore original requests
setDeviceId(mRequestedDeviceId);
setSessionId(requestedSessionId);
return result;
}
Here the mmap stream is created through MmapStreamInterface, and a callback is passed in as well. That callback is not a data callback but an event callback; look at the method declaration:
static status_t openMmapStream(stream_direction_t direction,
const audio_attributes_t *attr,
audio_config_base_t *config,
const AudioClient& client,
audio_port_handle_t *deviceId,
audio_session_t *sessionId,
const sp<MmapStreamCallback>& callback,
sp<MmapStreamInterface>& interface,
audio_port_handle_t *handle);
And the MmapStreamCallback interface:
class MmapStreamCallback : public virtual RefBase {
public:
/**
* The mmap stream should be torn down because conditions that permitted its creation with
* the requested parameters have changed and do not allow it to operate with the requested
* constraints any more.
* \param[in] handle handle for the client stream to tear down.
*/
virtual void onTearDown(audio_port_handle_t handle) = 0;
/**
* The volume to be applied to the use case specified when opening the stream has changed
* \param[in] channels a channel mask containing all channels the volume should be applied to.
* \param[in] values the volume values to be applied to each channel. The size of the vector
* should correspond to the channel count retrieved with
* audio_channel_count_from_in_mask() or audio_channel_count_from_out_mask()
*/
virtual void onVolumeChanged(audio_channel_mask_t channels, Vector<float> values) = 0;
/**
* The device the stream is routed to/from has changed
* \param[in] onRoutingChanged the unique device ID of the new device.
*/
virtual void onRoutingChanged(audio_port_handle_t deviceId) = 0;
protected:
MmapStreamCallback() {}
virtual ~MmapStreamCallback() {}
};
From this we can roughly infer that the AAudio side is notified of certain system events and handles them proactively. onTearDown: system conditions changed and the created stream must be closed. The one we meet most often is onRoutingChanged: the route changed, for instance a headset was plugged or unplugged, and AAudio gets notified. Haven't we all noticed, when using AAudio, that plugging or unplugging a headset disconnects the stream? Could it be that onRoutingChanged sees the route change and drops the connection?
void AAudioServiceEndpointMMAP::onRoutingChanged(audio_port_handle_t portHandle) {
const int32_t deviceId = static_cast<int32_t>(portHandle);
ALOGD("%s() called with dev %d, old = %d", __func__, deviceId, getDeviceId());
if (getDeviceId() != deviceId) {
if (getDeviceId() != AUDIO_PORT_HANDLE_NONE) {
android::sp<AAudioServiceEndpointMMAP> holdEndpoint(this);
std::thread asyncTask([holdEndpoint, deviceId]() {
ALOGD("onRoutingChanged() asyncTask launched");
holdEndpoint->disconnectRegisteredStreams();
holdEndpoint->setDeviceId(deviceId);
});
asyncTask.detach();
} else {
setDeviceId(deviceId);
}
}
};
Exactly as expected: on a routing change, the deviceId from when the stream was connected is compared with the new one, and if they differ the current streams are disconnected.
std::vector<android::sp<AAudioServiceStreamBase>>
AAudioServiceEndpoint::disconnectRegisteredStreams() {
std::vector<android::sp<AAudioServiceStreamBase>> streamsDisconnected;
{
std::lock_guard<std::mutex> lock(mLockStreams);
mRegisteredStreams.swap(streamsDisconnected);
}
mConnected.store(false);
// We need to stop all the streams before we disconnect them.
// Otherwise there is a race condition where the first disconnected app
// tries to reopen a stream as MMAP but is blocked by the second stream,
// which hasn't stopped yet. Then the first app ends up with a Legacy stream.
for (const auto &stream : streamsDisconnected) {
ALOGD("%s() - stop(), port = %d", __func__, stream->getPortHandle());
stream->stop();
}
for (const auto &stream : streamsDisconnected) {
ALOGD("%s() - disconnect(), port = %d", __func__, stream->getPortHandle());
stream->disconnect();
}
return streamsDisconnected;
}
Now the code matches the behavior we observe in practice: knowledge and action unified. A sketch of the usual client-side handling follows.
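For completeness, the typical client-side reaction, sketched with the NDK error callback. restart_stream is a hypothetical application function; AAudio requires that the stream not be closed or reopened from inside the callback itself, hence the separate thread:

#include <aaudio/AAudio.h>
#include <thread>

// Hypothetical app function that closes the old stream and opens a new one.
void restart_stream(void *userData);

static void on_error(AAudioStream *stream, void *userData, aaudio_result_t error) {
    (void) stream;
    if (error == AAUDIO_ERROR_DISCONNECTED) {
        // Routing changed (e.g. headset plugged); rebuild on another thread.
        std::thread(restart_stream, userData).detach();
    }
}

// When building the stream:
//   AAudioStreamBuilder_setErrorCallback(builder, on_error, myContext);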
Next, the creation of the mmap stream itself:
//static
__attribute__ ((visibility ("default")))
status_t MmapStreamInterface::openMmapStream(MmapStreamInterface::stream_direction_t direction,
const audio_attributes_t *attr,
audio_config_base_t *config,
const AudioClient& client,
audio_port_handle_t *deviceId,
audio_session_t *sessionId,
const sp<MmapStreamCallback>& callback,
sp<MmapStreamInterface>& interface,
audio_port_handle_t *handle)
{
// TODO: Use ServiceManager to get IAudioFlinger instead of by atomic pointer.
// This allows moving oboeservice (AAudio) to a separate process in the future.
sp<AudioFlinger> af = AudioFlinger::gAudioFlinger.load(); // either nullptr or singleton AF.
status_t ret = NO_INIT;
if (af != 0) {
ret = af->openMmapStream(
direction, attr, config, client, deviceId,
sessionId, callback, interface, handle);
}
return ret;
}
Sure enough, we have arrived at audioflinger, the most central audio service. On Android, every operation that actually touches the device has to pass through audioflinger. You could even summarize it this way: to understand Android audio you must understand audioflinger; it is the most central part of audio that we can read in full within the framework. Below it sits the HAL, where strategies differ by vendor, so AOSP code is of limited reference value there. Below the HAL is tinyalsa, which is merely an interface for talking to the kernel driver, not business logic; below that, the kernel audio driver. The further down you go, the more it demands: reading the audio driver requires familiarity with its three core components, codec, platform, and machine, and understanding those requires understanding the Linux device model. If the device model is clear to you as well, then the driver side of the Linux kernel is essentially yours, and the rest, filesystems, scheduling, storage, networking, comes down to personal experience and learning ability. Master those too and you have built a fairly complete body of Android knowledge, understanding Android in the true sense. A former colleague of mine always felt that writing Android app code did not count as understanding Android; when I first started with Android I did not quite agree, but after several more years of study I was deeply struck by how vast Android is, and now I fully share his view.
After that small detour, back to audioflinger creating the mmap stream:
status_t AudioFlinger::openMmapStream(MmapStreamInterface::stream_direction_t direction,
const audio_attributes_t *attr,
audio_config_base_t *config,
const AudioClient& client,
audio_port_handle_t *deviceId,
audio_session_t *sessionId,
const sp<MmapStreamCallback>& callback,
sp<MmapStreamInterface>& interface,
audio_port_handle_t *handle)
{
status_t ret = initCheck();
if (ret != NO_ERROR) {
return ret;
}
audio_session_t actualSessionId = *sessionId;
if (actualSessionId == AUDIO_SESSION_ALLOCATE) {
actualSessionId = (audio_session_t) newAudioUniqueId(AUDIO_UNIQUE_ID_USE_SESSION);
}
audio_stream_type_t streamType = AUDIO_STREAM_DEFAULT;
audio_io_handle_t io = AUDIO_IO_HANDLE_NONE;
audio_port_handle_t portId = AUDIO_PORT_HANDLE_NONE;
audio_attributes_t localAttr = *attr;
// TODO b/182392553: refactor or make clearer
pid_t clientPid =
VALUE_OR_RETURN_STATUS(aidl2legacy_int32_t_pid_t(client.attributionSource.pid));
bool updatePid = (clientPid == (pid_t)-1);
const uid_t callingUid = IPCThreadState::self()->getCallingUid();
AttributionSourceState adjAttributionSource = client.attributionSource;
if (!isAudioServerOrMediaServerOrSystemServerOrRootUid(callingUid)) {
uid_t clientUid =
VALUE_OR_RETURN_STATUS(aidl2legacy_int32_t_uid_t(client.attributionSource.uid));
ALOGW_IF(clientUid != callingUid,
"%s uid %d tried to pass itself off as %d",
__FUNCTION__, callingUid, clientUid);
adjAttributionSource.uid = VALUE_OR_RETURN_STATUS(legacy2aidl_uid_t_int32_t(callingUid));
updatePid = true;
}
if (updatePid) {
const pid_t callingPid = IPCThreadState::self()->getCallingPid();
ALOGW_IF(clientPid != (pid_t)-1 && clientPid != callingPid,
"%s uid %d pid %d tried to pass itself off as pid %d",
__func__, callingUid, callingPid, clientPid);
adjAttributionSource.pid = VALUE_OR_RETURN_STATUS(legacy2aidl_pid_t_int32_t(callingPid));
}
adjAttributionSource = AudioFlinger::checkAttributionSourcePackage(
adjAttributionSource);
if (direction == MmapStreamInterface::DIRECTION_OUTPUT) {
audio_config_t fullConfig = AUDIO_CONFIG_INITIALIZER;
fullConfig.sample_rate = config->sample_rate;
fullConfig.channel_mask = config->channel_mask;
fullConfig.format = config->format;
std::vector<audio_io_handle_t> secondaryOutputs;
bool isSpatialized;
ret = AudioSystem::getOutputForAttr(&localAttr, &io,
actualSessionId,
&streamType, adjAttributionSource,
&fullConfig,
(audio_output_flags_t)(AUDIO_OUTPUT_FLAG_MMAP_NOIRQ |
AUDIO_OUTPUT_FLAG_DIRECT),
deviceId, &portId, &secondaryOutputs, &isSpatialized);
ALOGW_IF(!secondaryOutputs.empty(),
"%s does not support secondary outputs, ignoring them", __func__);
} else {
ret = AudioSystem::getInputForAttr(&localAttr, &io,
RECORD_RIID_INVALID,
actualSessionId,
adjAttributionSource,
config,
AUDIO_INPUT_FLAG_MMAP_NOIRQ, deviceId, &portId);
}
if (ret != NO_ERROR) {
return ret;
}
// at this stage, a MmapThread was created when openOutput() or openInput() was called by
// audio policy manager and we can retrieve it
sp<MmapThread> thread = mMmapThreads.valueFor(io);
if (thread != 0) {
interface = new MmapThreadHandle(thread);
thread->configure(&localAttr, streamType, actualSessionId, callback, *deviceId, portId);
*handle = portId;
*sessionId = actualSessionId;
config->sample_rate = thread->sampleRate();
config->channel_mask = thread->channelMask();
config->format = thread->format();
} else {
if (direction == MmapStreamInterface::DIRECTION_OUTPUT) {
AudioSystem::releaseOutput(portId);
} else {
AudioSystem::releaseInput(portId);
}
ret = NO_INIT;
}
ALOGV("%s done status %d portId %d", __FUNCTION__, ret, portId);
return ret;
}
Here is what we can see:
- AAudio streams are actually assigned a sessionId too, but aaudioservice strips it out again so that no effects can be attached through it and add latency. This explains a problem I once hit: needing a sessionId, I found only the AudioRecord API exposes one, while OpenSL and AAudio do not; the reason is right here, protecting latency.
- Next, the matching device is obtained from AudioSystem, which descends into AudioPolicyService. That is a topic of its own, so we will not go deeper here and will analyze it separately later; for now assume everything is obtained successfully.
- Every device has a corresponding MmapThread, which is then wrapped in a binder handle so it can be used over IPC.
We also need to look at configure(). Note that the mmap capture thread is MmapCaptureThread, which inherits from MmapThread; its playback counterpart is MmapPlaybackThread.
AudioFlinger::MmapCaptureThread::MmapCaptureThread(
const sp<AudioFlinger>& audioFlinger, audio_io_handle_t id,
AudioHwDevice *hwDev, AudioStreamIn *input, bool systemReady)
: MmapThread(audioFlinger, id, hwDev, input->stream, systemReady, false /* isOut */),
mInput(input)
{
snprintf(mThreadName, kThreadNameLength, "AudioMmapIn_%X", id);
mChannelCount = audio_channel_count_from_in_mask(mChannelMask);
}
In other words, an mmap capture session has a thread whose name carries the "AudioMmapIn_" prefix, which you can see in systrace.
void AudioFlinger::MmapThread::configure(const audio_attributes_t *attr,
audio_stream_type_t streamType __unused,
audio_session_t sessionId,
const sp<MmapStreamCallback>& callback,
audio_port_handle_t deviceId,
audio_port_handle_t portId)
{
mAttr = *attr;
mSessionId = sessionId;
mCallback = callback;
mDeviceId = deviceId;
mPortId = portId;
}
A bit of an anticlimax: no mmap buffer has been created yet. Don't worry, there is one step we have not looked at. Going back to AAudioServiceEndpointMMAP::openWithFormat, so far we have only obtained the mmap proxy interface via open; the buffer, the real heart of mmap, still has to be created. Recall the following code:
status_t status = MmapStreamInterface::openMmapStream(streamDirection,
&attributes,
&config,
mMmapClient,
&deviceId,
&sessionId,
this, // callback
mMmapStream,
&mPortHandle);
ALOGD("%s() mMapClient.attributionSource = %s => portHandle = %d\n",
__func__, mMmapClient.attributionSource.toString().c_str(), mPortHandle);
if (status != OK) {
// This can happen if the resource is busy or the config does
// not match the hardware.
ALOGD("%s() - openMmapStream() returned status %d", __func__, status);
return AAUDIO_ERROR_UNAVAILABLE;
}
if (deviceId == AAUDIO_UNSPECIFIED) {
ALOGW("%s() - openMmapStream() failed to set deviceId", __func__);
}
setDeviceId(deviceId);
if (sessionId == AUDIO_SESSION_ALLOCATE) {
ALOGW("%s() - openMmapStream() failed to set sessionId", __func__);
}
aaudio_session_id_t actualSessionId =
(requestedSessionId == AAUDIO_SESSION_ID_NONE)
? AAUDIO_SESSION_ID_NONE
: (aaudio_session_id_t) sessionId;
setSessionId(actualSessionId);
ALOGD("%s(format = 0x%X) deviceId = %d, sessionId = %d",
__func__, audioFormat, getDeviceId(), getSessionId());
// Create MMAP/NOIRQ buffer.
result = createMmapBuffer(&mAudioDataFileDescriptor);
if (result != AAUDIO_OK) {
goto error;
}
There it is: createMmapBuffer. The mMmapStream here is the binder-capable MmapThread.
aaudio_result_t AAudioServiceEndpointMMAP::createMmapBuffer(
android::base::unique_fd* fileDescriptor)
{
memset(&mMmapBufferinfo, 0, sizeof(struct audio_mmap_buffer_info));
int32_t minSizeFrames = getBufferCapacity();
if (minSizeFrames <= 0) { // zero will get rejected
minSizeFrames = AAUDIO_BUFFER_CAPACITY_MIN;
}
status_t status = mMmapStream->createMmapBuffer(minSizeFrames, &mMmapBufferinfo);
bool isBufferShareable = mMmapBufferinfo.flags & AUDIO_MMAP_APPLICATION_SHAREABLE;
if (status != OK) {
ALOGE("%s() - createMmapBuffer() failed with status %d %s",
__func__, status, strerror(-status));
return AAUDIO_ERROR_UNAVAILABLE;
} else {
ALOGD("%s() createMmapBuffer() buffer_size = %d fr, burst_size %d fr"
", Sharable FD: %s",
__func__,
mMmapBufferinfo.buffer_size_frames,
mMmapBufferinfo.burst_size_frames,
isBufferShareable ? "Yes" : "No");
}
setBufferCapacity(mMmapBufferinfo.buffer_size_frames);
if (!isBufferShareable) {
// Exclusive mode can only be used by the service because the FD cannot be shared.
int32_t audioServiceUid =
VALUE_OR_FATAL(legacy2aidl_uid_t_int32_t(getuid()));
if ((mMmapClient.attributionSource.uid != audioServiceUid) &&
getSharingMode() == AAUDIO_SHARING_MODE_EXCLUSIVE) {
ALOGW("%s() - exclusive FD cannot be used by client", __func__);
return AAUDIO_ERROR_UNAVAILABLE;
}
}
// AAudio creates a copy of this FD and retains ownership of the copy.
// Assume that AudioFlinger will close the original shared_memory_fd.
fileDescriptor->reset(dup(mMmapBufferinfo.shared_memory_fd));
if (fileDescriptor->get() == -1) {
ALOGE("%s() - could not dup shared_memory_fd", __func__);
return AAUDIO_ERROR_INTERNAL;
}
// Call to HAL to make sure the transport FD was able to be closed by binder.
// This is a tricky workaround for a problem in Binder.
// TODO:[b/192048842] When that problem is fixed we may be able to remove or change this code.
struct audio_mmap_position position;
mMmapStream->getMmapPosition(&position);
mFramesPerBurst = mMmapBufferinfo.burst_size_frames;
return AAUDIO_OK;
}
The first thing to look at is mMmapBufferinfo, whose structure is:
typedef enum {
NONE = 0x0,
/**
* Only set this flag if applications can access the audio buffer memory
* shared with the backend (usually DSP) _without_ security issue.
*
* Setting this flag also implies that Binder will allow passing the shared memory FD
* to applications.
*
* That usually implies that the kernel will prevent any access to the
* memory surrounding the audio buffer as it could lead to a security breach.
*
* For example, a "/dev/snd/" file descriptor generally is not shareable,
* but an "anon_inode:dmabuffer" file descriptor is shareable.
* See also Linux kernel's dma_buf.
*
* This flag is required to support AAudio exclusive mode:
* See: https://source.android.com/devices/audio/aaudio
*/
AUDIO_MMAP_APPLICATION_SHAREABLE = 0x1,
} audio_mmap_buffer_flag;
/**
* Mmap buffer descriptor returned by audio_stream->create_mmap_buffer().
* note\ Used by streams opened in mmap mode.
*/
struct audio_mmap_buffer_info {
void* shared_memory_address; /**< base address of mmap memory buffer.
For use by local process only */
int32_t shared_memory_fd; /**< FD for mmap memory buffer */
int32_t buffer_size_frames; /**< total buffer size in frames */
int32_t burst_size_frames; /**< transfer size granularity in frames */
audio_mmap_buffer_flag flags; /**< Attributes describing the buffer. */
};
AUDIO_MMAP_APPLICATION_SHAREABLE is the flag that lets the application share the buffer with the kernel driver. The mmap shared-memory address is shared_memory_address and the corresponding fd is shared_memory_fd. Next we will see how this information is obtained; first, a sketch of what the client eventually does with that fd.
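Looking ahead: once the client receives shared_memory_fd, it maps buffer_size_frames worth of audio into its own address space. bytesPerFrame is assumed here to come from the negotiated format and channel count:

#include <sys/mman.h>
#include <cstddef>
#include <cstdint>

// Map the driver's audio buffer into this process. After this, writes (or
// reads, for capture) land directly in the memory the DSP/driver works on.
void *map_mmap_buffer(int sharedFd, int32_t bufferSizeFrames, int32_t bytesPerFrame) {
    const size_t bytes = (size_t) bufferSizeFrames * (size_t) bytesPerFrame;
    void *p = mmap(nullptr, bytes, PROT_READ | PROT_WRITE, MAP_SHARED, sharedFd, 0);
    return (p == MAP_FAILED) ? nullptr : p;
}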
Creating the shared buffer falls to MmapThread:
status_t AudioFlinger::MmapThread::createMmapBuffer(int32_t minSizeFrames,
struct audio_mmap_buffer_info *info)
{
if (mHalStream == 0) {
return NO_INIT;
}
mStandby = true;
return mHalStream->createMmapBuffer(minSizeFrames, info);
}
Next we reach the HAL layer. First comes libaudiohal, a layer we can still read in AOSP, which provides a uniform wrapper around the HAL interfaces:
status_t StreamHalHidl::createMmapBuffer(int32_t minSizeFrames,
struct audio_mmap_buffer_info *info) {
TIME_CHECK();
Result retval;
Return<void> ret = mStream->createMmapBuffer(
minSizeFrames,
[&](Result r, const MmapBufferInfo& hidlInfo) {
retval = r;
if (retval == Result::OK) {
const native_handle *handle = hidlInfo.sharedMemory.handle();
if (handle->numFds > 0) {
info->shared_memory_fd = handle->data[0];
#if MAJOR_VERSION >= 4
info->flags = audio_mmap_buffer_flag(hidlInfo.flags);
#endif
info->buffer_size_frames = hidlInfo.bufferSizeFrames;
// Negative buffer size frame was a hack in O and P to
// indicate that the buffer is shareable to applications
if (info->buffer_size_frames < 0) {
info->buffer_size_frames *= -1;
info->flags = audio_mmap_buffer_flag(
info->flags | AUDIO_MMAP_APPLICATION_SHAREABLE);
}
info->burst_size_frames = hidlInfo.burstSizeFrames;
// info->shared_memory_address is not needed in HIDL context
info->shared_memory_address = NULL;
} else {
retval = Result::NOT_INITIALIZED;
}
}
});
return processReturn("createMmapBuffer", ret, retval);
}
Here we can already see fd, flags, buffer_size_frames, and burst_size_frames being filled in, but the mmap buffer still is not created here.
One level further down is the default HAL implementation in AOSP:
Return<void> StreamIn::createMmapBuffer(int32_t minSizeFrames, createMmapBuffer_cb _hidl_cb) {
return mStreamMmap->createMmapBuffer(minSizeFrames, audio_stream_in_frame_size(mStream),
_hidl_cb);
}
And the StreamMmap implementation:
template <typename T>
Return<void> StreamMmap<T>::createMmapBuffer(int32_t minSizeFrames, size_t frameSize,
IStream::createMmapBuffer_cb _hidl_cb) {
Result retval(Result::NOT_SUPPORTED);
MmapBufferInfo info;
native_handle_t* hidlHandle = nullptr;
if (mStream->create_mmap_buffer != NULL) {
if (minSizeFrames <= 0) {
retval = Result::INVALID_ARGUMENTS;
goto exit;
}
struct audio_mmap_buffer_info halInfo;
retval = Stream::analyzeStatus(
"create_mmap_buffer", mStream->create_mmap_buffer(mStream, minSizeFrames, &halInfo));
if (retval == Result::OK) {
hidlHandle = native_handle_create(1, 0);
hidlHandle->data[0] = halInfo.shared_memory_fd;
// Negative buffer size frame is a legacy hack to indicate that the buffer
// is shareable to applications before the relevant flag was introduced
bool applicationShareable =
halInfo.flags & AUDIO_MMAP_APPLICATION_SHAREABLE || halInfo.buffer_size_frames < 0;
halInfo.buffer_size_frames = abs(halInfo.buffer_size_frames);
info.sharedMemory = // hidl_memory size must always be positive
hidl_memory("audio_buffer", hidlHandle, frameSize * halInfo.buffer_size_frames);
#if MAJOR_VERSION == 2
if (applicationShareable) {
halInfo.buffer_size_frames *= -1;
}
#else
info.flags =
halInfo.flags | (applicationShareable ? MmapBufferFlag::APPLICATION_SHAREABLE
: MmapBufferFlag::NONE);
#endif
info.bufferSizeFrames = halInfo.buffer_size_frames;
info.burstSizeFrames = halInfo.burst_size_frames;
}
}
exit:
_hidl_cb(retval, info);
if (hidlHandle != nullptr) {
native_handle_delete(hidlHandle);
}
return Void();
}
StreamMmap is also just a wrapper, which lets vendors substitute their own implementations flexibly; that is the elegance of the HAL layer.
Next, suppose the device uses a Qualcomm chip; then we can go on to Qualcomm's audio HAL mmap implementation, located at hardware/qcom/audio/hal/audio_hw.c:
static int adev_open_input_stream(struct audio_hw_device *dev,
audio_io_handle_t handle,
audio_devices_t devices,
struct audio_config *config,
struct audio_stream_in **stream_in,
audio_input_flags_t flags,
const char *address __unused,
audio_source_t source )
{
struct audio_device *adev = (struct audio_device *)dev;
struct stream_in *in;
int ret = 0, buffer_size, frame_size;
int channel_count;
bool is_low_latency = false;
bool is_usb_dev = audio_is_usb_in_device(devices);
bool may_use_hifi_record = adev_input_allow_hifi_record(adev,
devices,
flags,
source);
ALOGV("%s: enter: flags %#x, is_usb_dev %d, may_use_hifi_record %d,"
" sample_rate %u, channel_mask %#x, format %#x",
__func__, flags, is_usb_dev, may_use_hifi_record,
config->sample_rate, config->channel_mask, config->format);
*stream_in = NULL;
if (is_usb_dev && !is_usb_ready(adev, false /* is_playback */)) {
return -ENOSYS;
}
if (!(is_usb_dev && may_use_hifi_record)) {
if (config->sample_rate == 0)
config->sample_rate = DEFAULT_INPUT_SAMPLING_RATE;
if (config->channel_mask == AUDIO_CHANNEL_NONE)
config->channel_mask = AUDIO_CHANNEL_IN_MONO;
if (config->format == AUDIO_FORMAT_DEFAULT)
config->format = AUDIO_FORMAT_PCM_16_BIT;
channel_count = audio_channel_count_from_in_mask(config->channel_mask);
if (check_input_parameters(config->sample_rate, config->format, channel_count, false) != 0)
return -EINVAL;
}
if (audio_extn_tfa_98xx_is_supported() &&
(audio_extn_hfp_is_active(adev) || voice_is_in_call(adev)))
return -EINVAL;
in = (struct stream_in *)calloc(1, sizeof(struct stream_in));
pthread_mutex_init(&in->lock, (const pthread_mutexattr_t *) NULL);
pthread_mutex_init(&in->pre_lock, (const pthread_mutexattr_t *) NULL);
in->stream.common.get_sample_rate = in_get_sample_rate;
in->stream.common.set_sample_rate = in_set_sample_rate;
in->stream.common.get_buffer_size = in_get_buffer_size;
in->stream.common.get_channels = in_get_channels;
in->stream.common.get_format = in_get_format;
in->stream.common.set_format = in_set_format;
in->stream.common.standby = in_standby;
in->stream.common.dump = in_dump;
in->stream.common.set_parameters = in_set_parameters;
in->stream.common.get_parameters = in_get_parameters;
in->stream.common.add_audio_effect = in_add_audio_effect;
in->stream.common.remove_audio_effect = in_remove_audio_effect;
in->stream.set_gain = in_set_gain;
in->stream.read = in_read;
in->stream.get_input_frames_lost = in_get_input_frames_lost;
in->stream.get_capture_position = in_get_capture_position;
in->stream.get_active_microphones = in_get_active_microphones;
in->stream.set_microphone_direction = in_set_microphone_direction;
in->stream.set_microphone_field_dimension = in_set_microphone_field_dimension;
in->stream.update_sink_metadata = in_update_sink_metadata;
in->device = devices;
in->source = source;
in->dev = adev;
in->standby = 1;
in->capture_handle = handle;
in->flags = flags;
in->direction = MIC_DIRECTION_UNSPECIFIED;
in->zoom = 0;
in->mmap_shared_memory_fd = -1; // not open
list_init(&in->aec_list);
list_init(&in->ns_list);
ALOGV("%s: source %d, config->channel_mask %#x", __func__, source, config->channel_mask);
if (source == AUDIO_SOURCE_VOICE_UPLINK ||
source == AUDIO_SOURCE_VOICE_DOWNLINK) {
/* Force channel config requested to mono if incall
record is being requested for only uplink/downlink */
if (config->channel_mask != AUDIO_CHANNEL_IN_MONO) {
config->channel_mask = AUDIO_CHANNEL_IN_MONO;
ret = -EINVAL;
goto err_open;
}
}
if (is_usb_dev && may_use_hifi_record) {
/* HiFi record selects an appropriate format, channel, rate combo
depending on sink capabilities*/
ret = read_usb_sup_params_and_compare(false /*is_playback*/,
&config->format,
&in->supported_formats[0],
MAX_SUPPORTED_FORMATS,
&config->channel_mask,
&in->supported_channel_masks[0],
MAX_SUPPORTED_CHANNEL_MASKS,
&config->sample_rate,
&in->supported_sample_rates[0],
MAX_SUPPORTED_SAMPLE_RATES);
if (ret != 0) {
ret = -EINVAL;
goto err_open;
}
channel_count = audio_channel_count_from_in_mask(config->channel_mask);
} else if (config->format == AUDIO_FORMAT_DEFAULT) {
config->format = AUDIO_FORMAT_PCM_16_BIT;
} else if (config->format == AUDIO_FORMAT_PCM_FLOAT ||
config->format == AUDIO_FORMAT_PCM_24_BIT_PACKED ||
config->format == AUDIO_FORMAT_PCM_8_24_BIT) {
bool ret_error = false;
/* 24 bit is restricted to UNPROCESSED source only,also format supported
from HAL is 8_24
*> In case of UNPROCESSED source, for 24 bit, if format requested is other than
8_24 return error indicating supported format is 8_24
*> In case of any other source requesting 24 bit or float return error
indicating format supported is 16 bit only.
on error flinger will retry with supported format passed
*/
if (!is_supported_24bits_audiosource(source)) {
config->format = AUDIO_FORMAT_PCM_16_BIT;
ret_error = true;
} else if (config->format != AUDIO_FORMAT_PCM_8_24_BIT) {
config->format = AUDIO_FORMAT_PCM_8_24_BIT;
ret_error = true;
}
if (ret_error) {
ret = -EINVAL;
goto err_open;
}
}
in->format = config->format;
in->channel_mask = config->channel_mask;
/* Update config params with the requested sample rate and channels */
if (in->device == AUDIO_DEVICE_IN_TELEPHONY_RX) {
if (config->sample_rate == 0)
config->sample_rate = AFE_PROXY_SAMPLING_RATE;
if (config->sample_rate != 48000 && config->sample_rate != 16000 &&
config->sample_rate != 8000) {
config->sample_rate = AFE_PROXY_SAMPLING_RATE;
ret = -EINVAL;
goto err_open;
}
if (config->format != AUDIO_FORMAT_PCM_16_BIT) {
config->format = AUDIO_FORMAT_PCM_16_BIT;
ret = -EINVAL;
goto err_open;
}
in->usecase = USECASE_AUDIO_RECORD_AFE_PROXY;
in->config = pcm_config_afe_proxy_record;
in->af_period_multiplier = 1;
} else if (is_usb_dev && may_use_hifi_record) {
in->usecase = USECASE_AUDIO_RECORD_HIFI;
in->config = pcm_config_audio_capture;
frame_size = audio_stream_in_frame_size(&in->stream);
buffer_size = get_stream_buffer_size(AUDIO_CAPTURE_PERIOD_DURATION_MSEC,
config->sample_rate,
config->format,
channel_count,
false /*is_low_latency*/);
in->config.period_size = buffer_size / frame_size;
in->config.rate = config->sample_rate;
in->af_period_multiplier = 1;
in->config.format = pcm_format_from_audio_format(config->format);
} else {
in->usecase = USECASE_AUDIO_RECORD;
if (config->sample_rate == LOW_LATENCY_CAPTURE_SAMPLE_RATE &&
(in->flags & AUDIO_INPUT_FLAG_FAST) != 0) {
is_low_latency = true;
#if LOW_LATENCY_CAPTURE_USE_CASE
in->usecase = USECASE_AUDIO_RECORD_LOW_LATENCY;
#endif
in->realtime = may_use_noirq_mode(adev, in->usecase, in->flags);
if (!in->realtime) {
in->config = pcm_config_audio_capture;
frame_size = audio_stream_in_frame_size(&in->stream);
buffer_size = get_stream_buffer_size(AUDIO_CAPTURE_PERIOD_DURATION_MSEC,
config->sample_rate,
config->format,
channel_count,
is_low_latency);
in->config.period_size = buffer_size / frame_size;
in->config.rate = config->sample_rate;
in->af_period_multiplier = 1;
} else {
// period size is left untouched for rt mode playback
in->config = pcm_config_audio_capture_rt;
in->af_period_multiplier = af_period_multiplier;
}
} else if ((config->sample_rate == LOW_LATENCY_CAPTURE_SAMPLE_RATE) &&
((in->flags & AUDIO_INPUT_FLAG_MMAP_NOIRQ) != 0)) {
// FIXME: Add support for multichannel capture over USB using MMAP
in->usecase = USECASE_AUDIO_RECORD_MMAP;
in->config = pcm_config_mmap_capture;
in->stream.start = in_start;
in->stream.stop = in_stop;
in->stream.create_mmap_buffer = in_create_mmap_buffer;
in->stream.get_mmap_position = in_get_mmap_position;
in->af_period_multiplier = 1;
ALOGV("%s: USECASE_AUDIO_RECORD_MMAP", __func__);
} else if (in->source == AUDIO_SOURCE_VOICE_COMMUNICATION &&
in->flags & AUDIO_INPUT_FLAG_VOIP_TX &&
(config->sample_rate == 8000 ||
config->sample_rate == 16000 ||
config->sample_rate == 32000 ||
config->sample_rate == 48000) &&
channel_count == 1) {
in->usecase = USECASE_AUDIO_RECORD_VOIP;
in->config = pcm_config_audio_capture;
frame_size = audio_stream_in_frame_size(&in->stream);
buffer_size = get_stream_buffer_size(VOIP_CAPTURE_PERIOD_DURATION_MSEC,
config->sample_rate,
config->format,
channel_count, false /*is_low_latency*/);
in->config.period_size = buffer_size / frame_size;
in->config.period_count = VOIP_CAPTURE_PERIOD_COUNT;
in->config.rate = config->sample_rate;
in->af_period_multiplier = 1;
} else {
in->config = pcm_config_audio_capture;
frame_size = audio_stream_in_frame_size(&in->stream);
buffer_size = get_stream_buffer_size(AUDIO_CAPTURE_PERIOD_DURATION_MSEC,
config->sample_rate,
config->format,
channel_count,
is_low_latency);
in->config.period_size = buffer_size / frame_size;
in->config.rate = config->sample_rate;
in->af_period_multiplier = 1;
}
if (config->format == AUDIO_FORMAT_PCM_8_24_BIT)
in->config.format = PCM_FORMAT_S24_LE;
}
in->config.channels = channel_count;
in->sample_rate = in->config.rate;
register_format(in->format, in->supported_formats);
register_channel_mask(in->channel_mask, in->supported_channel_masks);
register_sample_rate(in->sample_rate, in->supported_sample_rates);
in->error_log = error_log_create(
ERROR_LOG_ENTRIES,
NANOS_PER_SECOND /* aggregate consecutive identical errors within one second */);
/* This stream could be for sound trigger lab,
get sound trigger pcm if present */
audio_extn_sound_trigger_check_and_get_session(in);
if (in->is_st_session)
in->flags |= AUDIO_INPUT_FLAG_HW_HOTWORD;
lock_input_stream(in);
audio_extn_snd_mon_register_listener(in, in_snd_mon_cb);
pthread_mutex_lock(&adev->lock);
in->card_status = adev->card_status;
pthread_mutex_unlock(&adev->lock);
pthread_mutex_unlock(&in->lock);
stream_app_type_cfg_init(&in->app_type_cfg);
*stream_in = &in->stream;
ALOGV("%s: exit", __func__);
return 0;
err_open:
free(in);
*stream_in = NULL;
return ret;
}
The key part, extracted:
else if ((config->sample_rate == LOW_LATENCY_CAPTURE_SAMPLE_RATE) &&
((in->flags & AUDIO_INPUT_FLAG_MMAP_NOIRQ) != 0)) {
// FIXME: Add support for multichannel capture over USB using MMAP
in->usecase = USECASE_AUDIO_RECORD_MMAP;
in->config = pcm_config_mmap_capture;
in->stream.start = in_start;
in->stream.stop = in_stop;
in->stream.create_mmap_buffer = in_create_mmap_buffer;
in->stream.get_mmap_position = in_get_mmap_position;
in->af_period_multiplier = 1;
ALOGV("%s: USECASE_AUDIO_RECORD_MMAP", __func__);
As you can see, to take the mmap path the sample rate must be LOW_LATENCY_CAPTURE_SAMPLE_RATE, i.e. 48 kHz. That is how the qcom HAL does it; MTK, HiSilicon and other vendors may differ, so treat this as a reference only.
In that case the usecase becomes USECASE_AUDIO_RECORD_MMAP, and create_mmap_buffer is wired to in_create_mmap_buffer.
You may have noticed the usecase field here: it describes the different kinds of capture/playback paths. The full set is:
const char * const use_case_table[AUDIO_USECASE_MAX] = {
[USECASE_AUDIO_PLAYBACK_DEEP_BUFFER] = "deep-buffer-playback",
[USECASE_AUDIO_PLAYBACK_LOW_LATENCY] = "low-latency-playback",
[USECASE_AUDIO_PLAYBACK_WITH_HAPTICS] = "audio-with-haptics-playback",
[USECASE_AUDIO_PLAYBACK_HIFI] = "hifi-playback",
[USECASE_AUDIO_PLAYBACK_OFFLOAD] = "compress-offload-playback",
[USECASE_AUDIO_PLAYBACK_TTS] = "audio-tts-playback",
[USECASE_AUDIO_PLAYBACK_ULL] = "audio-ull-playback",
[USECASE_AUDIO_PLAYBACK_MMAP] = "mmap-playback",
[USECASE_AUDIO_RECORD] = "audio-record",
[USECASE_AUDIO_RECORD_LOW_LATENCY] = "low-latency-record",
[USECASE_AUDIO_RECORD_MMAP] = "mmap-record",
[USECASE_AUDIO_RECORD_HIFI] = "hifi-record",
[USECASE_AUDIO_HFP_SCO] = "hfp-sco",
[USECASE_AUDIO_HFP_SCO_WB] = "hfp-sco-wb",
[USECASE_VOICE_CALL] = "voice-call",
[USECASE_VOICE2_CALL] = "voice2-call",
[USECASE_VOLTE_CALL] = "volte-call",
[USECASE_QCHAT_CALL] = "qchat-call",
[USECASE_VOWLAN_CALL] = "vowlan-call",
[USECASE_VOICEMMODE1_CALL] = "voicemmode1-call",
[USECASE_VOICEMMODE2_CALL] = "voicemmode2-call",
[USECASE_AUDIO_SPKR_CALIB_RX] = "spkr-rx-calib",
[USECASE_AUDIO_SPKR_CALIB_TX] = "spkr-vi-record",
[USECASE_AUDIO_PLAYBACK_AFE_PROXY] = "afe-proxy-playback",
[USECASE_AUDIO_RECORD_AFE_PROXY] = "afe-proxy-record",
[USECASE_INCALL_REC_UPLINK] = "incall-rec-uplink",
[USECASE_INCALL_REC_DOWNLINK] = "incall-rec-downlink",
[USECASE_INCALL_REC_UPLINK_AND_DOWNLINK] = "incall-rec-uplink-and-downlink",
[USECASE_AUDIO_PLAYBACK_VOIP] = "audio-playback-voip",
[USECASE_AUDIO_RECORD_VOIP] = "audio-record-voip",
[USECASE_INCALL_MUSIC_UPLINK] = "incall-music-uplink",
[USECASE_INCALL_MUSIC_UPLINK2] = "incall-music-uplink2",
[USECASE_AUDIO_A2DP_ABR_FEEDBACK] = "a2dp-abr-feedback",
};
Just as the application layer has normal, low-latency, and mmap modes, the HAL has a matching usecase for each; that is how a stream gets paired with the right kernel driver path. Keep this mapping in mind.
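To see what triggers these usecases from the application side, here is a minimal sketch of requesting a capture stream that is eligible for the mmap path, using the public AAudio NDK API. open_mmap_capture is a made-up helper, and whether mmap is actually granted still depends on the device's aaudio.mmap_policy and the HAL:
#include <aaudio/AAudio.h>
#include <stddef.h>
/* Hedged sketch: a 48 kHz low-latency exclusive capture request is what
 * makes the server eligible to pick USECASE_AUDIO_RECORD_MMAP; the final
 * decision rests with the mmap policy and the HAL, not the app. */
static AAudioStream *open_mmap_capture(void) {
    AAudioStreamBuilder *builder = NULL;
    AAudioStream *stream = NULL;
    if (AAudio_createStreamBuilder(&builder) != AAUDIO_OK)
        return NULL;
    AAudioStreamBuilder_setDirection(builder, AAUDIO_DIRECTION_INPUT);
    AAudioStreamBuilder_setSampleRate(builder, 48000); /* LOW_LATENCY_CAPTURE_SAMPLE_RATE on qcom */
    AAudioStreamBuilder_setPerformanceMode(builder, AAUDIO_PERFORMANCE_MODE_LOW_LATENCY);
    AAudioStreamBuilder_setSharingMode(builder, AAUDIO_SHARING_MODE_EXCLUSIVE);
    if (AAudioStreamBuilder_openStream(builder, &stream) != AAUDIO_OK)
        stream = NULL;
    AAudioStreamBuilder_delete(builder);
    return stream;
}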
Next, let's look at in_create_mmap_buffer:
static int in_create_mmap_buffer(const struct audio_stream_in *stream,
int32_t min_size_frames,
struct audio_mmap_buffer_info *info)
{
struct stream_in *in = (struct stream_in *)stream;
struct audio_device *adev = in->dev;
int ret = 0;
unsigned int offset1;
unsigned int frames1;
const char *step = "";
uint32_t mmap_size;
uint32_t buffer_size;
lock_input_stream(in);
pthread_mutex_lock(&adev->lock);
ALOGV("%s in %p", __func__, in);
if (info == NULL || min_size_frames <= 0 || min_size_frames > MMAP_MIN_SIZE_FRAMES_MAX) {
ALOGE("%s invalid argument info %p min_size_frames %d", __func__, info, min_size_frames);
ret = -EINVAL;
goto exit;
}
if (in->usecase != USECASE_AUDIO_RECORD_MMAP || !in->standby) {
ALOGE("%s: usecase = %d, standby = %d", __func__, in->usecase, in->standby);
ALOGV("%s in %p", __func__, in);
ret = -ENOSYS;
goto exit;
}
in->pcm_device_id = platform_get_pcm_device_id(in->usecase, PCM_CAPTURE);
if (in->pcm_device_id < 0) {
ALOGE("%s: Invalid PCM device id(%d) for the usecase(%d)",
__func__, in->pcm_device_id, in->usecase);
ret = -EINVAL;
goto exit;
}
adjust_mmap_period_count(&in->config, min_size_frames);
ALOGV("%s: Opening PCM device card_id(%d) device_id(%d), channels %d",
__func__, adev->snd_card, in->pcm_device_id, in->config.channels);
in->pcm = pcm_open(adev->snd_card, in->pcm_device_id,
(PCM_IN | PCM_MMAP | PCM_NOIRQ | PCM_MONOTONIC), &in->config);
if (in->pcm == NULL || !pcm_is_ready(in->pcm)) {
step = "open";
ret = -ENODEV;
goto exit;
}
ret = pcm_mmap_begin(in->pcm, &info->shared_memory_address, &offset1, &frames1);
if (ret < 0) {
step = "begin";
goto exit;
}
info->buffer_size_frames = pcm_get_buffer_size(in->pcm);
buffer_size = pcm_frames_to_bytes(in->pcm, info->buffer_size_frames);
info->burst_size_frames = in->config.period_size;
ret = platform_get_mmap_data_fd(adev->platform,
in->pcm_device_id, 1 /*capture*/,
&info->shared_memory_fd,
&mmap_size);
if (ret < 0) {
// Fall back to non exclusive mode
info->shared_memory_fd = pcm_get_poll_fd(in->pcm);
} else {
in->mmap_shared_memory_fd = info->shared_memory_fd; // for closing later
ALOGV("%s: opened mmap_shared_memory_fd = %d", __func__, in->mmap_shared_memory_fd);
if (mmap_size < buffer_size) {
step = "mmap";
goto exit;
}
// FIXME: indicate exclusive mode support by returning a negative buffer size
info->buffer_size_frames *= -1;
}
memset(info->shared_memory_address, 0, buffer_size);
ret = pcm_mmap_commit(in->pcm, 0, MMAP_PERIOD_SIZE);
if (ret < 0) {
step = "commit";
goto exit;
}
in->mmap_time_offset_nanos = in_get_mmap_time_offset();
in->standby = false;
ret = 0;
ALOGV("%s: got mmap buffer address %p info->buffer_size_frames %d",
__func__, info->shared_memory_address, info->buffer_size_frames);
exit:
if (ret != 0) {
if (in->pcm == NULL) {
ALOGE("%s: %s - %d", __func__, step, ret);
} else {
ALOGE("%s: %s %s", __func__, step, pcm_get_error(in->pcm));
pcm_close(in->pcm);
in->pcm = NULL;
}
}
pthread_mutex_unlock(&adev->lock);
pthread_mutex_unlock(&in->lock);
return ret;
}
As you can see, the mmap address is obtained through pcm_mmap_begin, while the fd backing the shared memory comes from platform_get_mmap_data_fd, falling back to pcm_get_poll_fd when exclusive mode is unavailable. These calls take us down into the tinyalsa layer, which lines up with the Android system diagram from the beginning: we are descending one layer at a time.
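One detail worth flagging before we descend: the FIXME about the negative buffer size. Here is a minimal sketch of how the caller interprets it; the struct mirrors the fields of audio_mmap_buffer_info from <system/audio.h> and is redeclared only to keep the sketch self-contained, and supports_exclusive_mode is a made-up helper:
#include <stdbool.h>
#include <stdlib.h>
/* Stand-in for audio_mmap_buffer_info, abridged for illustration. */
struct mmap_buffer_info {
    void *shared_memory_address;
    int   shared_memory_fd;
    int   buffer_size_frames;  /* negative => exclusive mode supported */
    int   burst_size_frames;
};
static bool supports_exclusive_mode(const struct mmap_buffer_info *info,
                                    int *size_frames) {
    /* Per the FIXME in in_create_mmap_buffer: a negative size is the HAL's
     * signal that the fd maps the real DMA buffer (exclusive capable). */
    *size_frames = abs(info->buffer_size_frames);
    return info->buffer_size_frames < 0;
}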
Let's follow them into tinyalsa:
int pcm_mmap_begin(struct pcm *pcm, void **areas, unsigned int *offset,
unsigned int *frames)
{
unsigned int continuous, copy_frames, avail;
/* return the mmap buffer */
*areas = pcm->mmap_buffer;
/* and the application offset in frames */
*offset = pcm->mmap_control->appl_ptr % pcm->buffer_size;
avail = pcm_mmap_avail(pcm);
if (avail > pcm->buffer_size)
avail = pcm->buffer_size;
continuous = pcm->buffer_size - *offset;
    /* we can only copy frames if they are available and contiguous */
copy_frames = *frames;
if (copy_frames > avail)
copy_frames = avail;
if (copy_frames > continuous)
copy_frames = continuous;
*frames = copy_frames;
return 0;
}
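For context, pcm_mmap_begin is designed to be paired with pcm_mmap_commit: begin() exposes a contiguous window of the ring buffer, the caller copies out of it, and commit() advances appl_ptr by the frames actually consumed. A hedged sketch of that canonical capture loop (mmap_read_once is a made-up helper):
#include <string.h>
#include <tinyalsa/asoundlib.h>
/* Hedged sketch of the begin/commit pattern, not AOSP code. */
static int mmap_read_once(struct pcm *pcm, void *dst, unsigned int want_frames) {
    void *areas;
    unsigned int offset, frames = want_frames; /* in: wanted, out: granted */
    int ret = pcm_mmap_begin(pcm, &areas, &offset, &frames);
    if (ret < 0 || frames == 0)
        return ret;
    /* Copy the contiguous chunk straight out of the shared DMA buffer. */
    memcpy(dst, (char *)areas + pcm_frames_to_bytes(pcm, offset),
           pcm_frames_to_bytes(pcm, frames));
    /* Mark the frames consumed so the driver can reuse that region. */
    ret = pcm_mmap_commit(pcm, offset, frames);
    return ret < 0 ? ret : (int)frames;
}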
Note that the buffer address is simply pcm->mmap_buffer, which was set up when the device was opened; what pcm_mmap_avail supplies is the number of available frames:
int pcm_mmap_avail(struct pcm *pcm)
{
pcm_sync_ptr(pcm, SNDRV_PCM_SYNC_PTR_HWSYNC);
if (pcm->flags & PCM_IN) {
return (int) pcm_mmap_capture_avail(pcm);
} else {
return (int) pcm_mmap_playback_avail(pcm);
}
}
Since we are looking at capture, the relevant branch is pcm_mmap_capture_avail:
static inline long pcm_mmap_capture_avail(struct pcm *pcm)
{
long avail = pcm->mmap_status->hw_ptr - pcm->mmap_control->appl_ptr;
if (avail < 0) {
avail += pcm->boundary;
}
return avail;
}
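The wraparound handling deserves a second look. Both pointers are free-running counters that wrap at pcm->boundary, a large multiple of the buffer size, so the subtraction can momentarily go negative. A toy illustration (not ALSA code):
#include <assert.h>
/* hw_ptr counts frames the DMA engine has produced, appl_ptr counts frames
 * the application has consumed; both wrap at 'boundary'. */
static long capture_avail(long hw_ptr, long appl_ptr, long boundary) {
    long avail = hw_ptr - appl_ptr;
    if (avail < 0)
        avail += boundary; /* hw_ptr wrapped before appl_ptr did */
    return avail;
}
int main(void) {
    const long boundary = 1L << 30;
    assert(capture_avail(1480, 1000, boundary) == 480);          /* normal case  */
    assert(capture_avail(100, boundary - 380, boundary) == 480); /* wrapped case */
    return 0;
}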
So where does pcm->mmap_status come from? Recall that in the HAL, pcm_open was called before pcm_mmap_begin; it also lives in tinyalsa. Let's now see how the PCM device is opened:
/** Opens a PCM.
* @param card The card that the pcm belongs to.
* The default card is zero.
* @param device The device that the pcm belongs to.
* The default device is zero.
* @param flags Specify characteristics and functionality about the pcm.
 * May be a bitwise OR of the following:
* - @ref PCM_IN
* - @ref PCM_OUT
* - @ref PCM_MMAP
* - @ref PCM_NOIRQ
* - @ref PCM_MONOTONIC
* @param config The hardware and software parameters to open the PCM with.
* @returns A PCM structure.
* If an error occurs, the pointer of bad_pcm is returned.
* Otherwise, it returns the pointer of PCM object.
* Client code should check that the PCM opened properly by calling @ref pcm_is_ready.
* If @ref pcm_is_ready returns false, check @ref pcm_get_error for more information.
* @ingroup libtinyalsa-pcm
*/
struct pcm *pcm_open(unsigned int card, unsigned int device,
unsigned int flags, const struct pcm_config *config)
{
struct pcm *pcm;
struct snd_pcm_info info;
int rc;
pcm = calloc(1, sizeof(struct pcm));
if (!pcm) {
oops(&bad_pcm, ENOMEM, "can't allocate PCM object");
return &bad_pcm;
}
    /* Default to hw_ops, attempt plugin open only if hw (/dev/snd/pcm*) open fails */
pcm->ops = &hw_ops;
pcm->fd = pcm->ops->open(card, device, flags, &pcm->data, NULL);
#ifdef TINYALSA_USES_PLUGINS
if (pcm->fd < 0) {
int pcm_type;
pcm->snd_node = snd_utils_open_pcm(card, device);
pcm_type = snd_utils_get_node_type(pcm->snd_node);
if (!pcm->snd_node || pcm_type != SND_NODE_TYPE_PLUGIN) {
oops(&bad_pcm, ENODEV, "no device (hw/plugin) for card(%u), device(%u)",
card, device);
goto fail_close_dev_node;
}
pcm->ops = &plug_ops;
pcm->fd = pcm->ops->open(card, device, flags, &pcm->data, pcm->snd_node);
}
#endif
if (pcm->fd < 0) {
oops(&bad_pcm, errno, "cannot open device (%u) for card (%u)",
device, card);
goto fail_close_dev_node;
}
pcm->flags = flags;
if (pcm->ops->ioctl(pcm->data, SNDRV_PCM_IOCTL_INFO, &info)) {
oops(&bad_pcm, errno, "cannot get info");
goto fail_close;
}
pcm->subdevice = info.subdevice;
if (pcm_set_config(pcm, config) != 0)
goto fail_close;
rc = pcm_hw_mmap_status(pcm);
if (rc < 0) {
oops(&bad_pcm, errno, "mmap status failed");
goto fail;
}
#ifdef SNDRV_PCM_IOCTL_TTSTAMP
if (pcm->flags & PCM_MONOTONIC) {
int arg = SNDRV_PCM_TSTAMP_TYPE_MONOTONIC;
rc = pcm->ops->ioctl(pcm->data, SNDRV_PCM_IOCTL_TTSTAMP, &arg);
if (rc < 0) {
oops(&bad_pcm, errno, "cannot set timestamp type");
goto fail;
}
}
#endif
pcm->xruns = 0;
return pcm;
fail:
pcm_hw_munmap_status(pcm);
if (flags & PCM_MMAP)
pcm->ops->munmap(pcm->data, pcm->mmap_buffer, pcm_frames_to_bytes(pcm, pcm->buffer_size));
fail_close:
pcm->ops->close(pcm->data);
fail_close_dev_node:
#ifdef TINYALSA_USES_PLUGINS
if (pcm->snd_node)
snd_utils_close_dev_node(pcm->snd_node);
#endif
free(pcm);
return &bad_pcm;
}
hw_ops is yet another layer of encapsulation over the sound-card driver. The HAL is plain C, yet the OOP mindset is baked in deeply; whoever designed this structure was a true master.
const struct pcm_ops hw_ops = {
.open = pcm_hw_open,
.close = pcm_hw_close,
.ioctl = pcm_hw_ioctl,
.mmap = pcm_hw_mmap,
.munmap = pcm_hw_munmap,
.poll = pcm_hw_poll,
};
Back in pcm_open, one of its steps is pcm_hw_mmap_status:
static int pcm_hw_mmap_status(struct pcm *pcm)
{
if (pcm->sync_ptr)
return 0;
int page_size = sysconf(_SC_PAGE_SIZE);
pcm->mmap_status = pcm->ops->mmap(pcm->data, NULL, page_size, PROT_READ, MAP_SHARED,
SNDRV_PCM_MMAP_OFFSET_STATUS);
if (pcm->mmap_status == MAP_FAILED)
pcm->mmap_status = NULL;
if (!pcm->mmap_status)
goto mmap_error;
pcm->mmap_control = pcm->ops->mmap(pcm->data, NULL, page_size, PROT_READ | PROT_WRITE,
MAP_SHARED, SNDRV_PCM_MMAP_OFFSET_CONTROL);
if (pcm->mmap_control == MAP_FAILED)
pcm->mmap_control = NULL;
if (!pcm->mmap_control) {
pcm->ops->munmap(pcm->data, pcm->mmap_status, page_size);
pcm->mmap_status = NULL;
goto mmap_error;
}
return 0;
mmap_error:
pcm->sync_ptr = calloc(1, sizeof(*pcm->sync_ptr));
if (!pcm->sync_ptr)
return -ENOMEM;
pcm->mmap_status = &pcm->sync_ptr->s.status;
pcm->mmap_control = &pcm->sync_ptr->c.control;
return 0;
}
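For reference, the two pages mapped here correspond to these kernel UAPI structures, abridged from <sound/asound.h> (typedefs expanded, later fields dropped; the exact layout varies a little across kernel versions). They give user space a lock-free view of the driver's ring-buffer pointers:
#include <time.h>
/* Abridged UAPI excerpt, for illustration only. */
struct snd_pcm_mmap_status {
    int state;              /* RO: current stream state */
    int pad1;
    unsigned long hw_ptr;   /* RO: DMA position, advanced by the kernel */
    struct timespec tstamp; /* timestamp of the last pointer update */
    int suspended_state;    /* RO: state before suspend */
};
struct snd_pcm_mmap_control {
    unsigned long appl_ptr;  /* RW: application position */
    unsigned long avail_min; /* RW: minimum available frames for wakeup */
};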
So pcm->mmap_status comes from pcm->ops->mmap; looking up hw_ops, mmap resolves to pcm_hw_mmap:
static void *pcm_hw_mmap(void *data, void *addr, size_t length, int prot,
int flags, off_t offset)
{
struct pcm_hw_data *hw_data = data;
return mmap(addr, length, prot, flags, hw_data->fd, offset);
}
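The offset argument is what selects which kernel region gets mapped; the relevant UAPI constants (stable since early ALSA, to the best of my knowledge) are:
/* From the kernel UAPI <sound/asound.h>: the mmap offset doubles as a
 * region selector rather than a real file offset. */
#define SNDRV_PCM_MMAP_OFFSET_DATA    0x00000000  /* the audio ring buffer     */
#define SNDRV_PCM_MMAP_OFFSET_STATUS  0x80000000  /* snd_pcm_mmap_status page  */
#define SNDRV_PCM_MMAP_OFFSET_CONTROL 0x81000000  /* snd_pcm_mmap_control page */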
See it now? It is a plain mmap(2) of the driver's memory, and the fd is the sound card's kernel device node, opened in pcm_hw_open:
static int pcm_hw_open(unsigned int card, unsigned int device,
unsigned int flags, void **data, struct snd_node *node)
{
struct pcm_hw_data *hw_data;
char fn[256];
int fd;
hw_data = calloc(1, sizeof(*hw_data));
if (!hw_data) {
return -ENOMEM;
}
snprintf(fn, sizeof(fn), "/dev/snd/pcmC%uD%u%c", card, device,
flags & PCM_IN ? 'c' : 'p');
// Open the device with non-blocking flag to avoid to be blocked in kernel when all of the
// substreams of this PCM device are opened by others.
fd = open(fn, O_RDWR | O_NONBLOCK);
if (fd < 0) {
free(hw_data);
return fd;
}
if ((flags & PCM_NONBLOCK) == 0) {
// Set the file descriptor to blocking mode.
if (fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) & ~O_NONBLOCK) < 0) {
fprintf(stderr, "failed to set to blocking mode on %s", fn);
close(fd);
free(hw_data);
return -ENODEV;
}
}
hw_data->card = card;
hw_data->device = device;
hw_data->fd = fd;
hw_data->node = node;
*data = hw_data;
return fd;
}
Here are the device nodes on my device:
flame:/dev/snd # ls
comprC0D15 hwC0D103 hwC0D142 hwC0D21 hwC0D3017 hwC0D44 hwC0D72 pcmC0D11c pcmC0D17p pcmC0D24c pcmC0D35c pcmC0D42c pcmC0D9p
comprC0D28 hwC0D104 hwC0D143 hwC0D22 hwC0D3033 hwC0D45 hwC0D87 pcmC0D11p pcmC0D18p pcmC0D25c pcmC0D37c pcmC0D43p timer
comprC0D29 hwC0D11 hwC0D144 hwC0D24 hwC0D32 hwC0D46 hwC0D88 pcmC0D12c pcmC0D19c pcmC0D26c pcmC0D38c pcmC0D44p
comprC0D30 hwC0D12 hwC0D145 hwC0D25 hwC0D33 hwC0D48 hwC0D89 pcmC0D12p pcmC0D19p pcmC0D27c pcmC0D38p pcmC0D4p
comprC0D31 hwC0D13 hwC0D15 hwC0D26 hwC0D35 hwC0D49 hwC0D9 pcmC0D13c pcmC0D1c pcmC0D27p pcmC0D39c pcmC0D5c
comprC0D32 hwC0D136 hwC0D16 hwC0D27 hwC0D39 hwC0D52 pcmC0D0c pcmC0D13p pcmC0D1p pcmC0D2c pcmC0D39p pcmC0D5p
comprC0D41 hwC0D137 hwC0D189 hwC0D28 hwC0D40 hwC0D53 pcmC0D0p pcmC0D14c pcmC0D20c pcmC0D2p pcmC0D3c pcmC0D6p
comprC0D8 hwC0D14 hwC0D190 hwC0D29 hwC0D41 hwC0D55 pcmC0D100p pcmC0D16c pcmC0D21c pcmC0D33c pcmC0D3p pcmC0D7c
controlC0 hwC0D140 hwC0D2 hwC0D3 hwC0D42 hwC0D56 pcmC0D10c pcmC0D16p pcmC0D22c pcmC0D33p pcmC0D40c pcmC0D99c
hwC0D10 hwC0D141 hwC0D20 hwC0D30 hwC0D43 hwC0D71 pcmC0D10p pcmC0D17c pcmC0D23c pcmC0D34c pcmC0D40p pcmC0D9c
Everything is here: every card, device, capture and playback stream has its own node. The names decode as pcmC<card>D<device><direction>, matching the snprintf in pcm_hw_open above: pcmC0D11c is card 0, device 11, capture ('c'); a trailing 'p' means playback.
An ordinary AudioRecord suffers copy after copy, mixing after mixing, resampling after resampling, crossing mountains and rivers before its data finally reaches the kernel driver here. mmap, by contrast, shares the driver's buffer straight up to AAudio and even the application, and that is all there is to it. By now, the secret of the mmap channel should feel thoroughly demystified.
Back in AAudioServiceEndpointMMAP, we now hold the mmap buffer address and its fd, but we are still inside audioserver. How does this shared memory get handed over to the application? That secret is saved for the next article.