
AudioUnit Mixing Explained

Author: BohrIsLay | Published 2022-03-19 13:11

Overview

This article shows how to read two audio files into memory, use them as the two inputs of a Mixer AudioUnit, feed the mixed output into a Remote I/O AudioUnit, and finally start the AUGraph to play the result.

It also covers how to adjust the volume of each of the two inputs, as well as the volume of the mixed output.

In particular, it focuses on how to fill the Mixer AudioUnit with data when the render chain pulls audio. Many articles never explain clearly why the data is filled this way, what effect the filling produces, how to fill so that the whole file plays, how to fill for looped playback, or which mistakes while filling can cause crackling ("firecracker") noise or no sound at all.

1. Reading the two local audio files into memory

Open the local files into an ExtAudioFileRef

CFURLRef sourceURL[2]; // URLs of the two source files
NSString *sourceA = [[NSBundle mainBundle] pathForResource:@"GuitarMonoSTP" ofType:@"aif"];
NSString *sourceB = [[NSBundle mainBundle] pathForResource:@"DrumsMonoSTP" ofType:@"aif"];
sourceURL[0] = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, (CFStringRef)sourceA, kCFURLPOSIXPathStyle, false);
sourceURL[1] = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, (CFStringRef)sourceB, kCFURLPOSIXPathStyle, false);
ExtAudioFileRef xafref = 0;
OSStatus result = ExtAudioFileOpenURL(sourceURL[i], &xafref); // open the i-th URL (this runs inside the per-file loop) into xafref
 

Read data from the ExtAudioFileRef in the desired client format

// Desired client (output) format: PCM, 44100 Hz, 4-byte Float32 samples (AVAudioPCMFormatFloat32),
// 1 channel, interleaved. (With a single channel, does interleaving even mean anything? I'd say no — YES and NO both work here.)
    AVAudioFormat *clientFormat = [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatFloat32
                                                         sampleRate:kGraphSampleRate
                                                         channels:1
                                                         interleaved:YES];
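
// (A hedged aside, not in the original sample: you can settle the interleaving
// question by inspecting the resulting ASBD directly.)
const AudioStreamBasicDescription *asbd = clientFormat.streamDescription;
printf("channels=%u bytesPerFrame=%u nonInterleaved=%d\n",
       (unsigned)asbd->mChannelsPerFrame,
       (unsigned)asbd->mBytesPerFrame,
       (int)((asbd->mFormatFlags & kAudioFormatFlagIsNonInterleaved) != 0));
// With one channel, both layouts describe the same byte stream.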
// get the file data format — this represents the file's actual data format (fileFormat)
AudioStreamBasicDescription fileFormat;
UInt32 propSize = sizeof(fileFormat);
OSStatus result = ExtAudioFileGetProperty(xafref, kExtAudioFileProperty_FileDataFormat, &propSize, &fileFormat);


// get the file's length in sample frames, i.e. how many frames the file contains
UInt64 numFrames = 0;
propSize = sizeof(numFrames);
result = ExtAudioFileGetProperty(xafref, kExtAudioFileProperty_FileLengthFrames, &propSize, &numFrames);

// set the client (output) format on the file
propSize = sizeof(AudioStreamBasicDescription);
result = ExtAudioFileSetProperty(xafref, kExtAudioFileProperty_ClientDataFormat, propSize, clientFormat.streamDescription);

// sample-rate ratio, used to account for any sample rate conversion
double rateRatio = kGraphSampleRate / fileFormat.mSampleRate;
numFrames = (numFrames * rateRatio); // length after conversion, in frames (e.g. a 22050 Hz file read into a 44100 Hz graph doubles the frame count)

// set up our buffer
mSoundBuffer[i].numFrames = (UInt32)numFrames;
mSoundBuffer[i].asbd = *(clientFormat.streamDescription);
        
UInt32 samples = (UInt32)numFrames * mSoundBuffer[i].asbd.mChannelsPerFrame;
mSoundBuffer[i].data = (Float32 *)calloc(samples, sizeof(Float32));
mSoundBuffer[i].sampleNum = 0;

// set up an AudioBufferList to read data into
AudioBufferList bufList;
bufList.mNumberBuffers = 1;
bufList.mBuffers[0].mNumberChannels = 1;
bufList.mBuffers[0].mData = mSoundBuffer[i].data;
bufList.mBuffers[0].mDataByteSize = samples * sizeof(Float32);

// perform a synchronous sequential read of the audio data out of the file into our allocated data buffer
UInt32 numPackets = (UInt32)numFrames;
result = ExtAudioFileRead(xafref, &numPackets, &bufList);
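
The sample reads the whole file with one synchronous call. As a hedged variation (my addition, not part of Apple's sample), the same read can be done in fixed-size chunks, which makes it easy to report progress or bail out early on large files:

UInt32 framesRemaining = (UInt32)numFrames;
Float32 *writePtr = mSoundBuffer[i].data;
while (framesRemaining > 0) {
    UInt32 framesThisPass = framesRemaining < 4096 ? framesRemaining : 4096;
    AudioBufferList chunkList;
    chunkList.mNumberBuffers = 1;
    chunkList.mBuffers[0].mNumberChannels = 1;
    chunkList.mBuffers[0].mData = writePtr;
    chunkList.mBuffers[0].mDataByteSize = framesThisPass * sizeof(Float32);
    result = ExtAudioFileRead(xafref, &framesThisPass, &chunkList);
    if (result || framesThisPass == 0) break; // error or end of file
    writePtr += framesThisPass;               // mono Float32: one sample per frame
    framesRemaining -= framesThisPass;
}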

2. Building the AUGraph and feeding both files into the Mixer AudioUnit

(Figure: AUGraph topology — two input callbacks feed the Mixer unit, whose output feeds the Remote I/O unit.)

A few points first. The mixer unit's input scope has multiple elements, while its output scope has one element. That differs from the Remote I/O unit, which has two elements, each with its own input scope and output scope.
Building the AUGraph means establishing these connections.
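
To make this concrete, here is a minimal sketch (assuming mMixer has already been fetched from its node with AUGraphNodeInfo, as in the full listing in section 5) that queries the element counts at runtime:

UInt32 count = 0;
UInt32 size = sizeof(count);
AudioUnitGetProperty(mMixer, kAudioUnitProperty_ElementCount,
                     kAudioUnitScope_Input, 0, &count, &size);
printf("mixer input-scope elements: %u\n", (unsigned)count);  // the bus count we configure below
AudioUnitGetProperty(mMixer, kAudioUnitProperty_ElementCount,
                     kAudioUnitScope_Output, 0, &count, &size);
printf("mixer output-scope elements: %u\n", (unsigned)count); // 1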

2.1 Creating the AUGraph

1. Create the Mixer AudioUnit and Remote I/O AudioUnit nodes

  AUNode outputNode;
    AUNode mixerNode;
    
    // this is the format for the graph
    mAudioFormat = [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatFloat32
                                          sampleRate:kGraphSampleRate
                                          channels:2
                                          interleaved:NO];
                                          
// create a new AUGraph
    result = NewAUGraph(&mGraph);
    
 // output unit
    CAComponentDescription output_desc(kAudioUnitType_Output, kAudioUnitSubType_RemoteIO, kAudioUnitManufacturer_Apple);
    CAShowComponentDescription(&output_desc);
    
    // multichannel mixer unit
    /**
     The multichannel mixer's input scope can have multiple elements (buses);
     its output scope has a single element.
     */
    CAComponentDescription mixer_desc(kAudioUnitType_Mixer, kAudioUnitSubType_MultiChannelMixer, kAudioUnitManufacturer_Apple);
    CAShowComponentDescription(&mixer_desc);

    printf("new nodes\n");

    // create a node in the graph that is an AudioUnit, using the supplied AudioComponentDescription to find and open that unit
    result = AUGraphAddNode(mGraph, &output_desc, &outputNode);
    if (result) { printf("AUGraphNewNode 1 result %ld %4.4s\n", (long)result, (char*)&result); return; }

    result = AUGraphAddNode(mGraph, &mixer_desc, &mixerNode );
    if (result) { printf("AUGraphNewNode 2 result %ld %4.4s\n", (long)result, (char*)&result); return; }
2.2 Connecting the mixer unit to the Remote I/O unit

Here the mixer unit's element 0 output becomes the input of the Remote I/O unit's element 0. On the Remote I/O unit, element 0's output scope connects to the output hardware (the speaker), while element 1's input scope connects to the input hardware (the microphone).


// connect a node's output to a node's input
    result = AUGraphConnectNodeInput(mGraph, mixerNode, 0, outputNode, 0);
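
The demo never records, but the element convention is easy to see in code: playback through element 0 is enabled by default, while microphone input through element 1 must be switched on explicitly. A sketch, assuming mOutput is the Remote I/O unit:

UInt32 enable = 1;
OSStatus status = AudioUnitSetProperty(mOutput, kAudioOutputUnitProperty_EnableIO,
                                       kAudioUnitScope_Input, 1, // element 1 = input hardware (mic)
                                       &enable, sizeof(enable));
if (status) { printf("EnableIO result %ld\n", (long)status); }
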
2.3 Feeding the audio file data into the mixer unit's inputs
for (int i = 0; i < numbuses; ++i) {
        // setup render callback struct
        AURenderCallbackStruct rcbs;
        rcbs.inputProc = &renderInput;
        rcbs.inputProcRefCon = mSoundBuffer;
        
        printf("set kAudioUnitProperty_SetRenderCallback for mixer input bus %d\n", i);
        
        // Set a callback for the specified node's specified input
        /**
         Set the input callback for element i of the input scope.
         */
        result = AUGraphSetNodeInputCallback(mGraph, mixerNode, i, &rcbs);
        // equivalent to AudioUnitSetProperty(mMixer, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, i, &rcbs, sizeof(rcbs));
        if (result) { printf("AUGraphSetNodeInputCallback result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); return; }

        // set input stream format to what we want
        printf("set mixer input kAudioUnitProperty_StreamFormat for bus %d\n", i);
        // set the audio format for element i of the input scope
        result = AudioUnitSetProperty(mMixer, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, i, mAudioFormat.streamDescription, sizeof(AudioStreamBasicDescription));
        if (result) { printf("AudioUnitSetProperty result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); return; }
    }
2.4 Setting the mixer unit's output format
 // set the audio format for element 0 of the output scope — there is normally just one output element
    result = AudioUnitSetProperty(mMixer, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, mAudioFormat.streamDescription, sizeof(AudioStreamBasicDescription));
2.5 Setting the Remote I/O unit's stream format
 result = AudioUnitSetProperty(mOutput, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, mAudioFormat.streamDescription, sizeof(AudioStreamBasicDescription));

3. Starting the AUGraph

Once the AUGraph is built, it can be started. From then on, whenever the graph needs data it fires the input callback we registered earlier, and that is where we must fill in the data.

3.1 Start
OSStatus result = AUGraphInitialize(mGraph);
result = AUGraphStart(mGraph);
3.2 Filling data when the render callback fires

Because we set the output format to 2-channel non-interleaved — LLLL...RRRR... rather than LRLRLR... — we fill ioData->mBuffers[0] with left-channel data and ioData->mBuffers[1] with right-channel data. How you fill them is entirely up to you: put the first file's data only in the left channel and the second file's only in the right, and the left speaker will play only the first file while the right speaker plays only the second; or write both files' data into both channels, and both speakers will play the mix.
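
A minimal sketch of the difference, using the same in/sample variables as the callback below: a non-interleaved format gets one buffer per channel, while an interleaved format would pack each frame's L and R samples side by side in a single buffer:

// non-interleaved (this demo): one buffer per channel, samples contiguous per channel
Float32 *outL = (Float32 *)ioData->mBuffers[0].mData;
Float32 *outR = (Float32 *)ioData->mBuffers[1].mData;
for (UInt32 i = 0; i < inNumberFrames; ++i) {
    outL[i] = in[sample];   // LLLL...
    outR[i] = in[sample++]; // RRRR...
}

// interleaved (if the format had interleaved:YES): a single buffer packed LRLR...
Float32 *out = (Float32 *)ioData->mBuffers[0].mData;
for (UInt32 i = 0; i < inNumberFrames; ++i) {
    out[2 * i]     = in[sample];   // left
    out[2 * i + 1] = in[sample++]; // right
}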

This render callback runs every time the graph needs data. If it is slow, the audio data arrives late and playback stutters — the sound cuts in and out like firecrackers — or nothing plays at all.
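
If you do want diagnostics, keep them off the render thread. A hedged sketch (my addition, not Apple's): the callback only stores an atomic value, and a normal-priority timer on the main thread does the printing:

#import <Foundation/Foundation.h>
#import <stdatomic.h>

static _Atomic uint32_t sLastSample = 0;

// safe inside the render callback: no locks, no allocation, no I/O
static inline void NoteRenderPosition(uint32_t sample) {
    atomic_store_explicit(&sLastSample, sample, memory_order_relaxed);
}

// called once from the main thread; printing happens here, not in the callback
static void StartRenderDiagnostics(void) {
    [NSTimer scheduledTimerWithTimeInterval:1.0 repeats:YES block:^(NSTimer *timer) {
        printf("render position: %u\n",
               atomic_load_explicit(&sLastSample, memory_order_relaxed));
    }];
}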

Looping is simple: track how many frames have been pulled so far, and when the callback reaches the last frame, reset sample to 0 and start pulling from the beginning again.
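
Equivalently, the wraparound can be written with modulo arithmetic — a sketch of the logic only; the compare-and-reset used in the demo below is cheaper because it avoids a division per sample:

for (UInt32 i = 0; i < inNumberFrames; ++i) {
    outA[i] = in[sample % bufSamples]; // the index always wraps into [0, bufSamples)
    sample++;
}
sndbuf[inBusNumber].sampleNum = sample % bufSamples; // persist the position for the next pull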

static OSStatus renderInput(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData)
{
    /**
     AURenderCallbackStruct rcbs;
     rcbs.inputProc = &renderInput;
     rcbs.inputProcRefCon = mSoundBuffer;
     inRefCon was set to mSoundBuffer, which is an array — SoundBuffer mSoundBuffer[MAXBUFS] —
     i.e. a pointer to SoundBuffer (SoundBufferPtr).
     */
    SoundBufferPtr sndbuf = (SoundBufferPtr)inRefCon;
    
    UInt32 sample = sndbuf[inBusNumber].sampleNum;      // frame number to start from
    UInt32 bufSamples = sndbuf[inBusNumber].numFrames;  // total number of frames in the sound buffer
    Float32 *in = sndbuf[inBusNumber].data; // audio data buffer
    // the mixer unit's output is configured as 2 channels
    Float32 *outA = (Float32 *)ioData->mBuffers[0].mData; // output audio buffer for L channel
    Float32 *outB = (Float32 *)ioData->mBuffers[1].mData; // output audio buffer for R channel
    
    // for demonstration purposes we've configured 2 stereo input busses for the mixer unit
    // but only provide a single channel of data from each input bus when asked and silence for the other channel
    // alternating as appropriate when asked to render bus 0 or bus 1's input
    /**
     The mixer unit's output format is non-interleaved, so
     ioData->mBuffers[0].mData is the output buffer for the L channel and
     ioData->mBuffers[1].mData is the output buffer for the R channel:
     mAudioFormat = [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatFloat32
                                           sampleRate:kGraphSampleRate
                                           channels:2
                                           interleaved:NO];
     */
    for (UInt32 i = 0; i < inNumberFrames; ++i) {
        
        // this variant renders bus 0 into the left channel only and bus 1 into the right channel only
        if (1 == inBusNumber) {
            outA[i] = 0;
            outB[i] = in[sample++];
        } else {
            outA[i] = in[sample++];
            outB[i] = 0;
        }
        
//        // this variant renders each bus into both channels, so both speakers play the mix
//        if (1 == inBusNumber) {//
//            outB[i] = in[sample++];
//            outA[i] = outB[i] ;
//        } else {
//            outA[i] = in[sample++];
//            outB[i] = outA[i];
//        }
        // loop-playback control
        if (sample >= bufSamples) { // >= so we never read one sample past the end of the buffer
            // start over from the beginning of the data, our audio simply loops
            printf("looping data for bus %d after %ld source frames rendered\n", (unsigned int)inBusNumber, (long)sample-1);
            sample = 0;
        }
    }
    // Record the playback position so the next pull continues from here; this is also what drives the looping.
    sndbuf[inBusNumber].sampleNum = sample; // keep track of where we are in the source data buffer
    
    // Time-consuming work such as the logging below causes crackling — the render deadline is missed and that slice of audio never plays:
//    printf("bus %d sample %d\n", (unsigned int)inBusNumber, (unsigned int)sample);
//    for (int i = 0; i < 10; i++) {
//        NSObject *obj = [NSObject new];
//
//        NSLog(@"bus %@ \n", obj);
//    }
    
    return noErr;
}

4. Adjusting volume

Volume is adjusted simply by setting parameters on the AudioUnit.

Controlling the input volume of each of the two buses:

// sets the input volume for a specific bus
- (void)setInputVolume:(UInt32)inputNum value:(AudioUnitParameterValue)value
{
    OSStatus result = AudioUnitSetParameter(mMixer, kMultiChannelMixerParam_Volume, kAudioUnitScope_Input, inputNum, value, 0);
    if (result) { printf("AudioUnitSetParameter kMultiChannelMixerParam_Volume Input result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); return; }
}

Setting the output volume of the mixed audio:

// sets the overall mixer output volume
- (void)setOutputVolume:(AudioUnitParameterValue)value
{
    OSStatus result = AudioUnitSetParameter(mMixer, kMultiChannelMixerParam_Volume, kAudioUnitScope_Output, 0, value, 0);
    if (result) { printf("AudioUnitSetParameter kMultiChannelMixerParam_Volume Output result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); return; }
}
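
kMultiChannelMixerParam_Volume takes values from 0 to 1. The multichannel mixer also exposes a per-bus pan parameter; a sketch in the same style (my addition, not in the demo), where -1 is hard left and 1 is hard right:

// sets the stereo pan for a specific bus
- (void)setInputPan:(UInt32)inputNum value:(AudioUnitParameterValue)value
{
    OSStatus result = AudioUnitSetParameter(mMixer, kMultiChannelMixerParam_Pan, kAudioUnitScope_Input, inputNum, value, 0);
    if (result) { printf("AudioUnitSetParameter kMultiChannelMixerParam_Pan result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); return; }
}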
5. Key code from the demo

The demo is in fact Apple's official sample; I have only added some comments.

https://developer.apple.com/library/archive/samplecode/iOSMultichannelMixerTest/Introduction/Intro.html#//apple_ref/doc/uid/TP40016060

MultichannelMixerController.h

/*
    Copyright (C) 2015 Apple Inc. All Rights Reserved.
    See LICENSE.txt for this sample’s licensing information
    
    Abstract:
    The Controller Class for the AUGraph.
*/

#import <AudioToolbox/AudioToolbox.h>
#import <AudioUnit/AudioUnit.h>
#import <AVFoundation/AVAudioFormat.h>

#import "CAComponentDescription.h"

#define MAXBUFS  2
#define NUMFILES 2

typedef struct {
    AudioStreamBasicDescription asbd;
    Float32 *data;
    UInt32 numFrames;
    UInt32 sampleNum;
} SoundBuffer, *SoundBufferPtr;

@interface MultichannelMixerController : NSObject
{
    CFURLRef sourceURL[2];
    
    AVAudioFormat *mAudioFormat;
    
    AUGraph   mGraph;
    AudioUnit mMixer;
    AudioUnit mOutput;
    
    SoundBuffer mSoundBuffer[MAXBUFS];

    Boolean isPlaying;
}

@property (readonly, nonatomic) Boolean isPlaying;

- (void)initializeAUGraph;

- (void)enableInput:(UInt32)inputNum isOn:(AudioUnitParameterValue)isONValue;
- (void)setInputVolume:(UInt32)inputNum value:(AudioUnitParameterValue)value;
- (void)setOutputVolume:(AudioUnitParameterValue)value;

- (void)startAUGraph;
- (void)stopAUGraph;

@end

MultichannelMixerController.m

/*
    Copyright (C) 2015 Apple Inc. All Rights Reserved.
    See LICENSE.txt for this sample’s licensing information
    
    Abstract:
    The Controller Class for the AUGraph.
*/

#import "MultiChannelMixerController.h"

const Float64 kGraphSampleRate = 44100.0; // 48000.0 optional tests

#pragma mark- RenderProc

/**
 This function fills data whenever the render chain pulls audio.
 */
// audio render procedure, don't allocate memory, don't take any locks, don't waste time, printf statements for debugging only may adversely affect render — you have been warned
static OSStatus renderInput(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData)
{
    /**
     AURenderCallbackStruct rcbs;
     rcbs.inputProc = &renderInput;
     rcbs.inputProcRefCon = mSoundBuffer;
     inRefCon was set to mSoundBuffer, which is an array — SoundBuffer mSoundBuffer[MAXBUFS] —
     i.e. a pointer to SoundBuffer (SoundBufferPtr).
     */
    SoundBufferPtr sndbuf = (SoundBufferPtr)inRefCon;
    
    UInt32 sample = sndbuf[inBusNumber].sampleNum;      // frame number to start from
    UInt32 bufSamples = sndbuf[inBusNumber].numFrames;  // total number of frames in the sound buffer
    Float32 *in = sndbuf[inBusNumber].data; // audio data buffer
    // the mixer unit's output is configured as 2 channels
    Float32 *outA = (Float32 *)ioData->mBuffers[0].mData; // output audio buffer for L channel
    Float32 *outB = (Float32 *)ioData->mBuffers[1].mData; // output audio buffer for R channel
    
    // for demonstration purposes we've configured 2 stereo input busses for the mixer unit
    // but only provide a single channel of data from each input bus when asked and silence for the other channel
    // alternating as appropriate when asked to render bus 0 or bus 1's input
    /**
     The mixer unit's output format is non-interleaved, so
     ioData->mBuffers[0].mData is the output buffer for the L channel and
     ioData->mBuffers[1].mData is the output buffer for the R channel:
     mAudioFormat = [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatFloat32
                                           sampleRate:kGraphSampleRate
                                           channels:2
                                           interleaved:NO];
     */
    for (UInt32 i = 0; i < inNumberFrames; ++i) {
        
        // this variant renders bus 0 into the left channel only and bus 1 into the right channel only
        if (1 == inBusNumber) {
            outA[i] = 0;
            outB[i] = in[sample++];
        } else {
            outA[i] = in[sample++];
            outB[i] = 0;
        }
        
//        // this variant renders each bus into both channels, so both speakers play the mix
//        if (1 == inBusNumber) {//
//            outB[i] = in[sample++];
//            outA[i] = outB[i] ;
//        } else {
//            outA[i] = in[sample++];
//            outB[i] = outA[i];
//        }
        // loop-playback control
        if (sample >= bufSamples) { // >= so we never read one sample past the end of the buffer
            // start over from the beginning of the data, our audio simply loops
            printf("looping data for bus %d after %ld source frames rendered\n", (unsigned int)inBusNumber, (long)sample-1);
            sample = 0;
        }
    }
    // record the playback position
    sndbuf[inBusNumber].sampleNum = sample; // keep track of where we are in the source data buffer
    
    // Time-consuming work such as the logging below causes crackling — the render deadline is missed and that slice of audio never plays:
//    printf("bus %d sample %d\n", (unsigned int)inBusNumber, (unsigned int)sample);
//    for (int i = 0; i < 10; i++) {
//        NSObject *obj = [NSObject new];
//
//        NSLog(@"bus %@ \n", obj);
//    }
    
    return noErr;
}

#pragma mark- MultichannelMixerController

@interface MultichannelMixerController (hidden)
 
- (void)loadFiles;
 
@end

@implementation MultichannelMixerController

@synthesize isPlaying;

- (void)dealloc
{    
    printf("MultichannelMixerController dealloc\n");
    
    DisposeAUGraph(mGraph);
    
    free(mSoundBuffer[0].data);
    free(mSoundBuffer[1].data);
    
    CFRelease(sourceURL[0]);
    CFRelease(sourceURL[1]);
    
    [mAudioFormat release];

    [super dealloc];
}

- (void)awakeFromNib
{
    printf("awakeFromNib\n");
    
    isPlaying = false;

    // clear the mSoundBuffer struct
    memset(&mSoundBuffer, 0, sizeof(mSoundBuffer));
    
    // create the URLs we'll use for source A and B
    NSString *sourceA = [[NSBundle mainBundle] pathForResource:@"GuitarMonoSTP" ofType:@"aif"];
    NSString *sourceB = [[NSBundle mainBundle] pathForResource:@"DrumsMonoSTP" ofType:@"aif"];
    sourceURL[0] = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, (CFStringRef)sourceA, kCFURLPOSIXPathStyle, false);
    sourceURL[1] = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, (CFStringRef)sourceB, kCFURLPOSIXPathStyle, false);
}

- (void)initializeAUGraph
{
    printf("initialize\n");
    
    AUNode outputNode;
    AUNode mixerNode;
    
    // this is the format for the graph
    mAudioFormat = [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatFloat32
                                          sampleRate:kGraphSampleRate
                                          channels:2
                                          interleaved:NO];
    
    OSStatus result = noErr;
    
    // load up the audio data
    [self performSelectorInBackground:@selector(loadFiles) withObject:nil];
    
    // create a new AUGraph
    result = NewAUGraph(&mGraph);
    if (result) { printf("NewAUGraph result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); return; }
    
    // create two AudioComponentDescriptions for the AUs we want in the graph
    
    // output unit
    CAComponentDescription output_desc(kAudioUnitType_Output, kAudioUnitSubType_RemoteIO, kAudioUnitManufacturer_Apple);
    CAShowComponentDescription(&output_desc);
    
    // multichannel mixer unit
    /**
     The multichannel mixer's input scope can have multiple elements (buses);
     its output scope has a single element.
     */
    CAComponentDescription mixer_desc(kAudioUnitType_Mixer, kAudioUnitSubType_MultiChannelMixer, kAudioUnitManufacturer_Apple);
    CAShowComponentDescription(&mixer_desc);

    printf("new nodes\n");

    // create a node in the graph that is an AudioUnit, using the supplied AudioComponentDescription to find and open that unit
    result = AUGraphAddNode(mGraph, &output_desc, &outputNode);
    if (result) { printf("AUGraphNewNode 1 result %ld %4.4s\n", (long)result, (char*)&result); return; }

    result = AUGraphAddNode(mGraph, &mixer_desc, &mixerNode );
    if (result) { printf("AUGraphNewNode 2 result %ld %4.4s\n", (long)result, (char*)&result); return; }

    // connect a node's output to a node's input
    result = AUGraphConnectNodeInput(mGraph, mixerNode, 0, outputNode, 0);
    if (result) { printf("AUGraphConnectNodeInput result %ld %4.4s\n", (long)result, (char*)&result); return; }
    
    // open the graph AudioUnits are open but not initialized (no resource allocation occurs here)
    result = AUGraphOpen(mGraph);
    if (result) { printf("AUGraphOpen result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); return; }
    
    result = AUGraphNodeInfo(mGraph, mixerNode, NULL, &mMixer);
    if (result) { printf("AUGraphNodeInfo result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); return; }
    
    result = AUGraphNodeInfo(mGraph, outputNode, NULL, &mOutput);
    if (result) { printf("AUGraphNodeInfo result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); return; }

    // set bus count
    UInt32 numbuses = 2;
    
    printf("set input bus count %u\n", (unsigned int)numbuses);
    /**
     The multichannel mixer's input scope can have multiple elements (buses);
     its output scope has a single element.
     */
    result = AudioUnitSetProperty(mMixer, kAudioUnitProperty_ElementCount, kAudioUnitScope_Input, 0, &numbuses, sizeof(numbuses));
    if (result) { printf("AudioUnitSetProperty result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); return; }

    for (int i = 0; i < numbuses; ++i) {
        // setup render callback struct
        AURenderCallbackStruct rcbs;
        rcbs.inputProc = &renderInput;
        rcbs.inputProcRefCon = mSoundBuffer;
        
        printf("set kAudioUnitProperty_SetRenderCallback for mixer input bus %d\n", i);
        
        // Set a callback for the specified node's specified input
        /**
         Set the input callback for element i of the input scope.
         */
        result = AUGraphSetNodeInputCallback(mGraph, mixerNode, i, &rcbs);
        // equivalent to AudioUnitSetProperty(mMixer, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, i, &rcbs, sizeof(rcbs));
        if (result) { printf("AUGraphSetNodeInputCallback result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); return; }

        // set input stream format to what we want
        printf("set mixer input kAudioUnitProperty_StreamFormat for bus %d\n", i);
        // set the audio format for element i of the input scope
        result = AudioUnitSetProperty(mMixer, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, i, mAudioFormat.streamDescription, sizeof(AudioStreamBasicDescription));
        if (result) { printf("AudioUnitSetProperty result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); return; }
    }
    
    // set output stream format to what we want
    printf("set output kAudioUnitProperty_StreamFormat\n");
    // set the audio format for element 0 of the output scope — there is normally just one output element
    result = AudioUnitSetProperty(mMixer, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, mAudioFormat.streamDescription, sizeof(AudioStreamBasicDescription));
    if (result) { printf("AudioUnitSetProperty result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); return; }
    // Set the stream format on element 1 of the Remote I/O unit's output scope. The Remote I/O unit
    // has two elements: element 1's input scope connects to the input hardware (the microphone), and
    // element 0's output scope connects to the output hardware (the speaker).
    result = AudioUnitSetProperty(mOutput, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, mAudioFormat.streamDescription, sizeof(AudioStreamBasicDescription));
    if (result) { printf("AudioUnitSetProperty result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); return; }
        
    printf("AUGraphInitialize\n");
    
    // now that we've set everything up we can initialize the graph, this will also validate the connections
    result = AUGraphInitialize(mGraph);
    if (result) { printf("AUGraphInitialize result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); return; }
    
    CAShow(mGraph);
}

// load up audio data from the demo files into mSoundBuffer.data used in the render proc
- (void)loadFiles
{
    // Client (output) format: PCM, 44100 Hz, 4-byte Float32 samples (AVAudioPCMFormatFloat32),
    // 1 channel, interleaved (with one channel, interleaving is moot)
    AVAudioFormat *clientFormat = [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatFloat32
                                                         sampleRate:kGraphSampleRate
                                                         channels:1
                                                         interleaved:YES];
    
    for (int i = 0; i < NUMFILES && i < MAXBUFS; i++)  {
        printf("loadFiles, %d\n", i);
        
        ExtAudioFileRef xafref = 0;
        
        // open one of the two source files into xafref
        OSStatus result = ExtAudioFileOpenURL(sourceURL[i], &xafref);
        if (result || !xafref) { printf("ExtAudioFileOpenURL result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); break; }
        
        // get the file data format, this represents the file's actual data format
        AudioStreamBasicDescription fileFormat;
        UInt32 propSize = sizeof(fileFormat);
        // get the file's data format (fileFormat)
        result = ExtAudioFileGetProperty(xafref, kExtAudioFileProperty_FileDataFormat, &propSize, &fileFormat);
        if (result) { printf("ExtAudioFileGetProperty kExtAudioFileProperty_FileDataFormat result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); break; }
        
        // set the client format - this is the format we want back from ExtAudioFile and corresponds to the format
        // we will be providing to the input callback of the mixer, therefore the data type must be the same
        
        // sample-rate ratio, used to account for any sample rate conversion
        double rateRatio = kGraphSampleRate / fileFormat.mSampleRate;
        
        // set the client (output) format on the file
        propSize = sizeof(AudioStreamBasicDescription);
        result = ExtAudioFileSetProperty(xafref, kExtAudioFileProperty_ClientDataFormat, propSize, clientFormat.streamDescription);
        if (result) { printf("ExtAudioFileSetProperty kExtAudioFileProperty_ClientDataFormat %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); break; }
        
        // get the file's length in sample frames
        UInt64 numFrames = 0;
        propSize = sizeof(numFrames);
        result = ExtAudioFileGetProperty(xafref, kExtAudioFileProperty_FileLengthFrames, &propSize, &numFrames);
        if (result) { printf("ExtAudioFileGetProperty kExtAudioFileProperty_FileLengthFrames result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); break; }
        printf("File %d, Number of Sample Frames: %u\n", i, (unsigned int)numFrames);
        
        numFrames = (numFrames * rateRatio); // account for any sample rate conversion — length after conversion, in frames
        printf("File %d, Number of Sample Frames after rate conversion (if any): %u\n", i, (unsigned int)numFrames);
        
        // set up our buffer
        mSoundBuffer[i].numFrames = (UInt32)numFrames;
        mSoundBuffer[i].asbd = *(clientFormat.streamDescription);
        
        UInt32 samples = (UInt32)numFrames * mSoundBuffer[i].asbd.mChannelsPerFrame;
        mSoundBuffer[i].data = (Float32 *)calloc(samples, sizeof(Float32));
        mSoundBuffer[i].sampleNum = 0;
        
        // set up an AudioBufferList to read data into
        AudioBufferList bufList;
        bufList.mNumberBuffers = 1; // number of buffers
        bufList.mBuffers[0].mNumberChannels = 1; // channel count — both source files are mono
        bufList.mBuffers[0].mData = mSoundBuffer[i].data; // destination buffer
        bufList.mBuffers[0].mDataByteSize = samples * sizeof(Float32); // destination size in bytes

        // perform a synchronous sequential read of the audio data out of the file into our allocated data buffer
        UInt32 numPackets = (UInt32)numFrames;
        result = ExtAudioFileRead(xafref, &numPackets, &bufList);
        if (result) {
            printf("ExtAudioFileRead result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result);
            free(mSoundBuffer[i].data);
            mSoundBuffer[i].data = 0;
        }
        
        // close the file and dispose the ExtAudioFileRef
        ExtAudioFileDispose(xafref);
    }
    
    [clientFormat release];
}

#pragma mark-

// enable or disables a specific bus
- (void)enableInput:(UInt32)inputNum isOn:(AudioUnitParameterValue)isONValue
{
    printf("BUS %d isON %f\n", (unsigned int)inputNum, isONValue);
         
    OSStatus result = AudioUnitSetParameter(mMixer, kMultiChannelMixerParam_Enable, kAudioUnitScope_Input, inputNum, isONValue, 0);
    if (result) { printf("AudioUnitSetParameter kMultiChannelMixerParam_Enable result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); return; }

}

// sets the input volume for a specific bus
- (void)setInputVolume:(UInt32)inputNum value:(AudioUnitParameterValue)value
{
    OSStatus result = AudioUnitSetParameter(mMixer, kMultiChannelMixerParam_Volume, kAudioUnitScope_Input, inputNum, value, 0);
    if (result) { printf("AudioUnitSetParameter kMultiChannelMixerParam_Volume Input result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); return; }
}

// sets the overall mixer output volume
- (void)setOutputVolume:(AudioUnitParameterValue)value
{
    OSStatus result = AudioUnitSetParameter(mMixer, kMultiChannelMixerParam_Volume, kAudioUnitScope_Output, 0, value, 0);
    if (result) { printf("AudioUnitSetParameter kMultiChannelMixerParam_Volume Output result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); return; }
}

// starts rendering
- (void)startAUGraph
{
    printf("PLAY\n");
    
    OSStatus result = AUGraphStart(mGraph);
    if (result) { printf("AUGraphStart result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); return; }
    isPlaying = true;
}

// stops render
- (void)stopAUGraph
{
    printf("STOP\n");

    Boolean isRunning = false;
    
    OSStatus result = AUGraphIsRunning(mGraph, &isRunning);
    if (result) { printf("AUGraphIsRunning result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); return; }
    
    if (isRunning) {
        result = AUGraphStop(mGraph);
        if (result) { printf("AUGraphStop result %ld %08lX %4.4s\n", (long)result, (long)result, (char*)&result); return; }
        isPlaying = false;
    }
}

@end
