The yellow portion of the diagram is the part the app can control. Therefore, when setting the format for capturing microphone data, element 1 is always paired with the output scope; when setting the format for speaker playback, element 0 is always paired with the input scope.
Because kAudioOutputUnitProperty_EnableIO is set on kAudioUnitScope_Output, indicating that the I/O connects to the speaker (output), the bus identifier (AudioUnitElement) should be 0, so every output-related AudioUnitElement that follows is also 0. The code refers to it as OUTPUT_BUS.
:是计算机各种功能部件之间传送信息的公共通信干线
1. Create an AudioUnit of type kAudioUnitSubType_RemoteIO
AudioComponentDescription acd;
acd.componentType = kAudioUnitType_Output;
acd.componentSubType = kAudioUnitSubType_RemoteIO;
acd.componentManufacturer = kAudioUnitManufacturer_Apple;
acd.componentFlags = 0;
acd.componentFlagsMask = 0;
// Find a matching component and create an instance of it
AudioComponent ioUnitRef = AudioComponentFindNext(NULL, &acd);
status = AudioComponentInstanceNew(ioUnitRef, &_ioUnit);
2. Connect the speaker. Note that the AudioUnitScope here is kAudioUnitScope_Output:
UInt32 OUTPUT_BUS = 0;
UInt32 flag = 1;  // 1 = enable I/O on this bus
if (flag) {
    status = AudioUnitSetProperty(_ioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, OUTPUT_BUS, &flag, sizeof(flag));
}
3. Set the input format. Since we are playing a file here, it is the input format (the input scope of the output bus) that must be configured.
AudioStreamBasicDescription asbd;
asbd.mSampleRate = 44100;                                // sample rate
asbd.mFormatID = kAudioFormatLinearPCM;                  // PCM format
asbd.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger; // signed-integer samples
asbd.mFramesPerPacket = 1;                               // 1 frame per packet
asbd.mChannelsPerFrame = 1;                              // channel count (mono)
asbd.mBytesPerFrame = 2;                                 // channels * bits per channel / 8
asbd.mBytesPerPacket = 2;                                // bytes per frame * frames per packet
asbd.mBitsPerChannel = 16;                               // bit depth
UInt32 OUTPUT_BUS = 0;
status = AudioUnitSetProperty(_ioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, OUTPUT_BUS, &asbd, sizeof(asbd));
4. Set the render callback
AURenderCallbackStruct callBack;
callBack.inputProc = PlayCallBack;
callBack.inputProcRefCon = (__bridge void *)self;
status = AudioUnitSetProperty(_ioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Output, OUTPUT_BUS, &callBack, sizeof(callBack));
OSStatus PlayCallBack(void                        *inRefCon,
                      AudioUnitRenderActionFlags  *ioActionFlags,
                      const AudioTimeStamp        *inTimeStamp,
                      UInt32                       inBusNumber,
                      UInt32                       inNumberFrames,
                      AudioBufferList * __nullable ioData) {
    RGAudioBasePlay *player = (__bridge RGAudioBasePlay *)inRefCon;
    // Read file data straight into the unit's buffer; -read:maxLength: returns the bytes actually read
    ioData->mBuffers[0].mDataByteSize = (UInt32)[player->_inputSteam read:ioData->mBuffers[0].mData
                                                                 maxLength:(NSInteger)ioData->mBuffers[0].mDataByteSize];
    NSLog(@"out size: %u", (unsigned int)ioData->mBuffers[0].mDataByteSize);
    if (ioData->mBuffers[0].mDataByteSize == 0) {  // mDataByteSize is unsigned; 0 means end of stream
        dispatch_async(dispatch_get_main_queue(), ^{
            // [player stop];
        });
    }
    return noErr;
}
5. Start playback
AudioOutputUnitStart(_ioUnit);
When playing, you need to know the format of the source PCM file; otherwise playback easily goes wrong. For example, given a raw PCM file with the format below, the playback format must be set to match before it will play correctly:
File format:
Sample Rate: 44100
Format ID: lpcm
Format Flags: kAudioFormatFlagIsSignedInteger
Bytes per Packet: 4
Frames per Packet: 1
Bytes per Frame: 4
Channels per Frame: 2
Bits per Channel: 16
Guidelines for setting the output format's AudioStreamBasicDescription:
mBitsPerChannel
Bits per channel; typically 16.
mSampleRate
Sample rate; typically 44100.
mFormatFlags
The quantization type; audio data is usually signed integer.
mFramesPerPacket
Frames per packet; 1 for uncompressed audio.
mChannelsPerFrame
Channels per frame (can be 1 or 2).
mBytesPerFrame
Bytes per frame. The frame is the smallest unit, so the minimum is mChannelsPerFrame * mBitsPerChannel / 8; it can be set to
n * mChannelsPerFrame * mBitsPerChannel / 8 (n an integer >= 1).
mBytesPerPacket
Equal to mBytesPerFrame (since mFramesPerPacket is 1).
Note: the larger mBytesPerFrame is, the more bytes the render callback must supply, i.e. the larger ioData->mBuffers[0].mDataByteSize becomes; for a given frame count, that size depends only on mBytesPerFrame. The more data filled into ioData->mBuffers[0] per unit of time, the faster the audio plays.
OSStatus PlayCallBack(void                        *inRefCon,
                      AudioUnitRenderActionFlags  *ioActionFlags,
                      const AudioTimeStamp        *inTimeStamp,
                      UInt32                       inBusNumber,
                      UInt32                       inNumberFrames,
                      AudioBufferList * __nullable ioData) {
    RGAudioBasePlay *player = (__bridge RGAudioBasePlay *)inRefCon;
    // Read into the intermediate buffer first, then copy into the unit's buffer
    ioData->mBuffers[0].mDataByteSize = (UInt32)[player->_inputSteam read:player->_buffer
                                                                 maxLength:(NSInteger)ioData->mBuffers[0].mDataByteSize];
    memcpy(ioData->mBuffers[0].mData, player->_buffer, ioData->mBuffers[0].mDataByteSize);
    NSLog(@"out size: %u", (unsigned int)ioData->mBuffers[0].mDataByteSize);
    return noErr;
}
AudioStreamBasicDescription asbd;
asbd.mSampleRate = 44100;
asbd.mFormatID = kAudioFormatLinearPCM;              // encoding format
asbd.mFormatFlags = kAudioFormatFlagIsSignedInteger; // quantization type: signed integer samples
asbd.mBytesPerPacket = 8;
asbd.mFramesPerPacket = 1;
asbd.mBytesPerFrame = 8;                             // 4x the minimal 2-byte mono 16-bit frame
asbd.mChannelsPerFrame = 1;
asbd.mBitsPerChannel = 2 * 8;