LFLiveKit Source Code Analysis

Author: february29 | Published 2018-03-05 14:40

    Overview

    LFLiveKit is a framework that streams captured audio and video to a server in real time. It uses the GPUImage framework for video capture, which makes it easy to add filters and beauty effects. That powerful beautification support has made LFLiveKit a very popular push-streaming library among live-broadcast platforms.

    Starting a live stream is straightforward:

    - (LFLiveSession*)session {
        if (!_session) {
            _session = [[LFLiveSession alloc] initWithAudioConfiguration:[LFLiveAudioConfiguration defaultConfiguration] videoConfiguration:[LFLiveVideoConfiguration defaultConfiguration]];
            _session.running = YES;
            _session.preView = self.livingPreView;
            _session.delegate = self;
        }
        return _session;
    }
    
    - (void)startLive {
        LFLiveStreamInfo *streamInfo = [LFLiveStreamInfo new];
        streamInfo.url = @"rtmp://10.10.1.71:1935/rtmplive/123";
    //    streamInfo.url = @"rtmp://202.69.69.180:443/live/123";
    //    http://qqpull99.inke.cn/live/1506399510572961.flv?ikHost=tx&ikOp=0&codecInfo=8192
        
        [self.session startLive:streamInfo];
    }
    
    

    LFLiveStreamInfo: a class that describes the stream. It acts like a simple model that only stores information; through it you configure the concrete audio and video parameters of the stream. Its structure is shown below.

    (Figure: structure of LFLiveStreamInfo)

    LFLiveAudioSampleRate and LFLiveAudioBitRate are enumerations for the audio sample rate and bitrate.
    LFLiveAudioQuality is an audio-quality enumeration that bundles a few specific sample-rate/bitrate combinations.

    LFLiveVideoSessionPreset is a video-resolution enumeration that defines three 16:9 resolutions: low, medium, and high.
    LFLiveVideoQuality is a video-quality enumeration that defines several specific combinations of resolution, frame rate, and bitrate. A sketch of these declarations is shown below.
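
    For reference, the declarations look roughly like the following. This is a simplified sketch: the member names appear in the code quoted in this article, but the exact raw values and the full member lists live in LFLiveAudioConfiguration.h and LFLiveVideoConfiguration.h.

    #import <Foundation/Foundation.h>

    // Sketch of the option enums (simplified; see the LFLiveKit headers for the authoritative definitions).
    typedef NS_ENUM (NSUInteger, LFLiveAudioSampleRate) {
        LFLiveAudioSampleRate_16000Hz = 16000,
        LFLiveAudioSampleRate_44100Hz = 44100,
        LFLiveAudioSampleRate_48000Hz = 48000,
    };

    typedef NS_ENUM (NSUInteger, LFLiveAudioBitRate) {
        LFLiveAudioBitRate_32Kbps = 32000,
        LFLiveAudioBitRate_64Kbps = 64000,
        LFLiveAudioBitRate_96Kbps = 96000,
        LFLiveAudioBitRate_128Kbps = 128000,
    };

    typedef NS_ENUM (NSUInteger, LFLiveVideoSessionPreset) {
        LFCaptureSessionPreset360x640,   // low  (16:9)
        LFCaptureSessionPreset540x960,   // medium
        LFCaptureSessionPreset720x1280,  // high
    };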

    Here we split the workflow into four stages:

    • Audio/video parameter initialization
    • Audio/video capture
    • Audio/video encoding
    • Pushing the stream (publishing)
    - (LFLiveSession*)session {
        if (!_session) {
    // Initialization
            _session = [[LFLiveSession alloc] initWithAudioConfiguration:[LFLiveAudioConfiguration defaultConfiguration] videoConfiguration:[LFLiveVideoConfiguration defaultConfiguration]];
    // Capture starts here
            _session.running = YES;
            _session.preView = self.livingPreView;
            _session.delegate = self;
        }
        return _session;
    }
    
    

    The code above covers the first three stages: initialization, capture, and encoding.

    - (void)startLive {
        LFLiveStreamInfo *streamInfo = [LFLiveStreamInfo new];
        streamInfo.url = @"rtmp://10.10.1.71:1935/rtmplive/123";
    //    streamInfo.url = @"rtmp://202.69.69.180:443/live/123";
    //    http://qqpull99.inke.cn/live/1506399510572961.flv?ikHost=tx&ikOp=0&codecInfo=8192
        
        [self.session startLive:streamInfo];
    }
    
    

    This code completes the fourth stage: pushing the stream.

    Audio/Video Initialization

    Initialization sets up the audio/video parameters; most of the work happens while LFLiveSession is being initialized.

    - (nullable instancetype)initWithAudioConfiguration:(nullable LFLiveAudioConfiguration *)audioConfiguration videoConfiguration:(nullable LFLiveVideoConfiguration *)videoConfiguration captureType:(LFLiveCaptureTypeMask)captureType{
        if((captureType & LFLiveCaptureMaskAudio || captureType & LFLiveInputMaskAudio) && !audioConfiguration) @throw [NSException exceptionWithName:@"LFLiveSession init error" reason:@"audioConfiguration is nil " userInfo:nil];
        if((captureType & LFLiveCaptureMaskVideo || captureType & LFLiveInputMaskVideo) && !videoConfiguration) @throw [NSException exceptionWithName:@"LFLiveSession init error" reason:@"videoConfiguration is nil " userInfo:nil];
        if (self = [super init]) {
            _audioConfiguration = audioConfiguration;
            _videoConfiguration = videoConfiguration;
            _adaptiveBitrate = NO;
            _captureType = captureType;
        }
        return self;
    }
    
    

    Before LFLiveSession itself is initialized, the LFLiveAudioConfiguration and LFLiveVideoConfiguration objects are created and handed to it.

    Initializing LFLiveAudioConfiguration is just a series of assignments driven by the chosen LFLiveAudioQuality (LFLiveAudioQuality_Default by default),
    as follows:

    + (instancetype)defaultConfigurationForQuality:(LFLiveAudioQuality)audioQuality {
        LFLiveAudioConfiguration *audioConfig = [LFLiveAudioConfiguration new];
        audioConfig.numberOfChannels = 2;
        switch (audioQuality) {
        case LFLiveAudioQuality_Low: {
            audioConfig.audioBitrate = audioConfig.numberOfChannels == 1 ? LFLiveAudioBitRate_32Kbps : LFLiveAudioBitRate_64Kbps;
            audioConfig.audioSampleRate = LFLiveAudioSampleRate_16000Hz;
        }
            break;
        case LFLiveAudioQuality_Medium: {
            audioConfig.audioBitrate = LFLiveAudioBitRate_96Kbps;
            audioConfig.audioSampleRate = LFLiveAudioSampleRate_44100Hz;
        }
           break;
    // ... (remaining cases omitted)
        return audioConfig;
    }
    
    

    LFLiveVideoConfiguration is initialized from the given LFLiveVideoQuality (LFLiveVideoQuality_Default by default) and UIInterfaceOrientation (UIInterfaceOrientationPortrait by default), as follows.

    + (instancetype)defaultConfigurationForQuality:(LFLiveVideoQuality)videoQuality outputImageOrientation:(UIInterfaceOrientation)outputImageOrientation {
        LFLiveVideoConfiguration *configuration = [LFLiveVideoConfiguration new];
        switch (videoQuality) {
        case LFLiveVideoQuality_Low1:{
            configuration.sessionPreset = LFCaptureSessionPreset360x640;
            configuration.videoFrameRate = 15;
            configuration.videoMaxFrameRate = 15;
            configuration.videoMinFrameRate = 10;
            configuration.videoBitRate = 500 * 1000;
            configuration.videoMaxBitRate = 600 * 1000;
            configuration.videoMinBitRate = 400 * 1000;
            configuration.videoSize = CGSizeMake(360, 640);
        }
            break;
       // ... (remaining cases omitted)
        case LFLiveVideoQuality_High3:{
            configuration.sessionPreset = LFCaptureSessionPreset720x1280;
            configuration.videoFrameRate = 30;
            configuration.videoMaxFrameRate = 30;
            configuration.videoMinFrameRate = 15;
            configuration.videoBitRate = 1200 * 1000;
            configuration.videoMaxBitRate = 1440 * 1000;
            configuration.videoMinBitRate = 500 * 1000;
            configuration.videoSize = CGSizeMake(720, 1280);
        }
            break;
        default:
            break;
        }
    //Set the capture resolution
        configuration.sessionPreset = [configuration supportSessionPreset:configuration.sessionPreset];
    //Maximum keyframe interval (GOP size)
        configuration.videoMaxKeyframeInterval = configuration.videoFrameRate*2;
    //Video output orientation
        configuration.outputImageOrientation = outputImageOrientation;
    //Adjust videoSize according to what will actually be captured
        CGSize size = configuration.videoSize;
    //Swap width and height of the resolution according to the output orientation
        if(configuration.landscape) {
            configuration.videoSize = CGSizeMake(size.height, size.width);
        } else {
            configuration.videoSize = CGSizeMake(size.width, size.height);
        }
        return configuration;
        
    }
    
    

    When setting the resolution, the following method is called:

    - (LFLiveVideoSessionPreset)supportSessionPreset:(LFLiveVideoSessionPreset)sessionPreset {
        AVCaptureSession *session = [[AVCaptureSession alloc] init];
        AVCaptureDevice *inputCamera;
        NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
        for (AVCaptureDevice *device in devices){
            if ([device position] == AVCaptureDevicePositionFront){
                inputCamera = device;
            }
        }
        AVCaptureDeviceInput *videoInput = [[AVCaptureDeviceInput alloc] initWithDevice:inputCamera error:nil];
        
        if ([session canAddInput:videoInput]){
            [session addInput:videoInput];
        }
        
        if (![session canSetSessionPreset:self.avSessionPreset]) {
            if (sessionPreset == LFCaptureSessionPreset720x1280) {
                sessionPreset = LFCaptureSessionPreset540x960;
                if (![session canSetSessionPreset:self.avSessionPreset]) {
                    sessionPreset = LFCaptureSessionPreset360x640;
                }
            } else if (sessionPreset == LFCaptureSessionPreset540x960) {
                sessionPreset = LFCaptureSessionPreset360x640;
            }
        }
        return sessionPreset;
    }
    
    

    [configuration supportSessionPreset:configuration.sessionPreset] grabs the front camera and checks whether it supports the requested resolution; if not, the preset is stepped down one level at a time.

    For the videoSize adjustment, the getter first checks whether the output should respect the capture aspect ratio. If _videoSizeRespectingAspectRatio is NO (the default), the videoSize configured from LFLiveVideoQuality is returned directly; if it is YES, videoSize is scaled so that it keeps the aspect ratio of the captured video while fitting inside the size chosen by LFLiveVideoQuality. The details:

    - (CGSize)videoSize{
    //Depending on _videoSizeRespectingAspectRatio (should the output keep the capture aspect ratio; NO by default), call aspectRatioVideoSize.
        if(_videoSizeRespectingAspectRatio){
            return self.aspectRatioVideoSize;
        }
        return _videoSize;
    }
    
    //Returns a size scaled to the aspect ratio of the captured video; the result is fitted inside the configured _videoSize.
    - (CGSize)aspectRatioVideoSize{
    
        CGSize size = AVMakeRectWithAspectRatioInsideRect(self.captureOutVideoSize, CGRectMake(0, 0, _videoSize.width, _videoSize.height)).size;
        NSInteger width = ceil(size.width);
        NSInteger height = ceil(size.height);
        if(width %2 != 0) width = width - 1;
        if(height %2 != 0) height = height - 1;
        return CGSizeMake(width, height);
    }
    
      //Returns the capture size corresponding to _sessionPreset
    - (CGSize)captureOutVideoSize{
        CGSize videoSize = CGSizeZero;
        switch (_sessionPreset) {
            case LFCaptureSessionPreset360x640:{
                videoSize = CGSizeMake(360, 640);
            }
                break;
            case LFCaptureSessionPreset540x960:{
                videoSize = CGSizeMake(540, 960);
            }
                break;
            case LFCaptureSessionPreset720x1280:{
                videoSize = CGSizeMake(720, 1280);
            }
                break;
                
            default:{
                videoSize = CGSizeMake(360, 640);
            }
                break;
        }
        
        if (self.landscape){
            return CGSizeMake(videoSize.height, videoSize.width);
        }
        return videoSize;
    }
    
    
    

    Once LFLiveAudioConfiguration and LFLiveVideoConfiguration have been created, they are stored in LFLiveSession's internal audioConfiguration and videoConfiguration properties. The initializer also assigns _adaptiveBitrate ("The adaptiveBitrate control auto adjust bitrate. Default is NO") and _captureType (the capture type mask, distinguishing internal/external audio and video). A sketch of that mask is shown below.
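
    The mask names LFLiveCaptureMaskAudio, LFLiveCaptureMaskVideo, LFLiveInputMaskAudio, and LFLiveInputMaskVideo appear in the initializer quoted above; the bit layout below is a simplified sketch of how such a mask is plausibly declared (the real declaration is in LFLiveSession.h).

    // Sketch of the capture-type bit mask (simplified; consult LFLiveSession.h for the real declaration).
    typedef NS_ENUM (NSInteger, LFLiveCaptureType) {
        LFLiveCaptureAudio,   // capture audio internally (microphone)
        LFLiveCaptureVideo,   // capture video internally (camera)
        LFLiveInputAudio,     // audio is supplied from outside the framework
        LFLiveInputVideo,     // video is supplied from outside the framework
    };

    typedef NS_ENUM (NSInteger, LFLiveCaptureTypeMask) {
        LFLiveCaptureMaskAudio = (1 << LFLiveCaptureAudio),
        LFLiveCaptureMaskVideo = (1 << LFLiveCaptureVideo),
        LFLiveInputMaskAudio   = (1 << LFLiveInputAudio),
        LFLiveInputMaskVideo   = (1 << LFLiveInputVideo),
        // internal audio + internal video: the usual "capture everything" default
        LFLiveCaptureDefaultMask = LFLiveCaptureMaskAudio | LFLiveCaptureMaskVideo,
    };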

    Audio/Video Capture

    With the parameters set, the session can start capturing using them. Capture is kicked off like this:

    _session.running = YES;                 //start audio/video capture
    _session.preView = self.livingPreView;  //the view the captured video will be rendered into
    _session.delegate = self;               //LFLiveSessionDelegate
    

    This looks trivial, but it is not. The setRunning: setter initializes the audio and video capture modules:

    #pragma mark -- Getter Setter
    - (void)setRunning:(BOOL)running {
        if (_running == running) return;
        [self willChangeValueForKey:@"running"];
        _running = running;
        [self didChangeValueForKey:@"running"];
        self.videoCaptureSource.running = _running;
        self.audioCaptureSource.running = _running;
    }
    
    - (LFAudioCapture *)audioCaptureSource {
        if (!_audioCaptureSource) {
            if(self.captureType & LFLiveCaptureMaskAudio){
                _audioCaptureSource = [[LFAudioCapture alloc] initWithAudioConfiguration:_audioConfiguration];
                _audioCaptureSource.delegate = self;
            }
        }
        return _audioCaptureSource;
    }
    
    - (LFVideoCapture *)videoCaptureSource {
        if (!_videoCaptureSource) {
            if(self.captureType & LFLiveCaptureMaskVideo){
                _videoCaptureSource = [[LFVideoCapture alloc] initWithVideoConfiguration:_videoConfiguration];
                _videoCaptureSource.delegate = self;
            }
        }
        return _videoCaptureSource;
    }
    
    
    

    The overall capture flow looks roughly like this.

    (Figure: session -> setRunning flow)

    Notes

    • videoCaptureSource and audioCaptureSource are lazily loaded, so they are not necessarily created inside setRunning:. If _session.preView is assigned before _session.running, they will be created when preView is set.
    • The _audioCaptureSource.delegate = self; and _videoCaptureSource.delegate = self; assignments install the capture delegates; this is where the captured audio and video data comes back (see the protocol sketch below).
    • self.videoCaptureSource.running = _running; and self.audioCaptureSource.running = _running; then flip the running switch on each capture module once it has been created.
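
    Judging from the selectors used throughout the capture code (captureOutput:audioData: and captureOutput:pixelBuffer:), the two capture delegate protocols look roughly like this; the protocol names are a sketch, the actual declarations live in LFAudioCapture.h and LFVideoCapture.h.

    #import <Foundation/Foundation.h>
    #import <CoreVideo/CoreVideo.h>

    // Sketch of the capture delegates, inferred from the callbacks used in this article.
    @protocol LFAudioCaptureDelegate <NSObject>
    - (void)captureOutput:(nullable LFAudioCapture *)capture audioData:(nullable NSData *)audioData;
    @end

    @protocol LFVideoCaptureDelegate <NSObject>
    - (void)captureOutput:(nullable LFVideoCapture *)capture pixelBuffer:(nullable CVPixelBufferRef)pixelBuffer;
    @end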

    After videoCaptureSource and audioCaptureSource are initialized, their respective running setters are invoked. Let's look at each in turn.

    Video Capture Flow

    Video capture starts once running is set on LFVideoCapture (the videoCaptureSource instance). Inside LFLiveKit, video capture is built on the GPUImage framework; since we only care about the flow here, the GPUImage-specific setup is not covered in detail.

    - (void)setRunning:(BOOL)running {
        if (_running == running) return;
        _running = running;
        
        if (!_running) {
            [UIApplication sharedApplication].idleTimerDisabled = NO;
            [self.videoCamera stopCameraCapture];
            if(self.saveLocalVideo) [self.movieWriter finishRecording];
        } else {
            [UIApplication sharedApplication].idleTimerDisabled = YES;
            [self reloadFilter];
            [self.videoCamera startCameraCapture];
            if(self.saveLocalVideo) [self.movieWriter startRecording];
        }
    }
    
    
    

    The two key calls are [self reloadFilter] and [self.videoCamera startCameraCapture].
    [self reloadFilter] wires up the processing chain for the video and installs the callback that fires after each frame has been processed.
    [self.videoCamera startCameraCapture] starts the actual capture (GPUImage).

    - (void)reloadFilter{
        [self.filter removeAllTargets];
        [self.blendFilter removeAllTargets];
        [self.uiElementInput removeAllTargets];
        [self.videoCamera removeAllTargets];
        [self.output removeAllTargets];
        [self.cropfilter removeAllTargets];
        ///< Beauty filter
        // ... (omitted)
        
        ///< Mirror adjustment
        // ... (omitted)
        
        //< Add watermark
        if(self.warterMarkView){
            [self.filter addTarget:self.blendFilter];
            [self.uiElementInput addTarget:self.blendFilter];
            [self.blendFilter addTarget:self.gpuImageView];  //related to displaying the captured video
            if(self.saveLocalVideo) [self.blendFilter addTarget:self.movieWriter];
            [self.filter addTarget:self.output];
            [self.uiElementInput update];
        }else{
            [self.filter addTarget:self.output];
            [self.output addTarget:self.gpuImageView];//related to displaying the captured video
            if(self.saveLocalVideo) [self.output addTarget:self.movieWriter];
        }
    
        
        [self.filter forceProcessingAtSize:self.configuration.videoSize];
        [self.output forceProcessingAtSize:self.configuration.videoSize];
        [self.blendFilter forceProcessingAtSize:self.configuration.videoSize];
        [self.uiElementInput forceProcessingAtSize:self.configuration.videoSize];
        
        
        //< Output data: install the callback that runs after every processed frame
        __weak typeof(self) _self = self;
        [self.output setFrameProcessingCompletionBlock:^(GPUImageOutput *output, CMTime time) {
            [_self processVideo:output];
        }];
        
    }
    /// Callback after each frame has been processed
    - (void)processVideo:(GPUImageOutput *)output {
        __weak typeof(self) _self = self;
        @autoreleasepool {
            GPUImageFramebuffer *imageFramebuffer = output.framebufferForOutput;
            CVPixelBufferRef pixelBuffer = [imageFramebuffer pixelBuffer];
            if (pixelBuffer && _self.delegate && [_self.delegate respondsToSelector:@selector(captureOutput:pixelBuffer:)]) {
                [_self.delegate captureOutput:_self pixelBuffer:pixelBuffer];//ends up in the delegate we set during session initialization
            }
        }
    }
    
    
    
    /// Start capturing
    - (void)startCameraCapture;
    {
        if (![_captureSession isRunning])
        {
            startingCaptureTime = [NSDate date];
            [_captureSession startRunning];
        };
    }
    
    
    Audio Capture Flow

    Audio capture starts when audioCaptureSource.running is set. (Audio capture relies mainly on system APIs.)

    - (void)setRunning:(BOOL)running {
        if (_running == running) return;
        _running = running;
        if (_running) {
            dispatch_async(self.taskQueue, ^{
                self.isRunning = YES;
                NSLog(@"MicrophoneSource: startRunning");
                [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker | AVAudioSessionCategoryOptionInterruptSpokenAudioAndMixWithOthers error:nil];
                AudioOutputUnitStart(self.componetInstance);//start capturing audio into self.componetInstance
            });
        } else {
            dispatch_sync(self.taskQueue, ^{
                self.isRunning = NO;
                NSLog(@"MicrophoneSource: stopRunning");
                AudioOutputUnitStop(self.componetInstance);
            });
        }
    }
    
    

    The key call is AudioOutputUnitStart(self.componetInstance), which starts capturing audio through self.componetInstance.

    The audio capture callback is installed when the module is initialized:

    - (instancetype)initWithAudioConfiguration:(LFLiveAudioConfiguration *)configuration{
        if(self = [super init]){
            // ... (omitted)
            cb.inputProc = handleInputBuffer;//install the audio input callback
            // ... (omitted)
        }
        return self;
    }
    
    #pragma mark -- CallBack
    static OSStatus handleInputBuffer(void *inRefCon,
                                      AudioUnitRenderActionFlags *ioActionFlags,
                                      const AudioTimeStamp *inTimeStamp,
                                      UInt32 inBusNumber,
                                      UInt32 inNumberFrames,
                                      AudioBufferList *ioData) {
        @autoreleasepool {
            LFAudioCapture *source = (__bridge LFAudioCapture *)inRefCon;
            if (!source) return -1;
    
            AudioBuffer buffer;
            buffer.mData = NULL;
            buffer.mDataByteSize = 0;
            buffer.mNumberChannels = 1;
    
            AudioBufferList buffers;
            buffers.mNumberBuffers = 1;
            buffers.mBuffers[0] = buffer;
    
            OSStatus status = AudioUnitRender(source.componetInstance,
                                              ioActionFlags,
                                              inTimeStamp,
                                              inBusNumber,
                                              inNumberFrames,
                                              &buffers);
    
            if (source.muted) {
                for (int i = 0; i < buffers.mNumberBuffers; i++) {
                    AudioBuffer ab = buffers.mBuffers[i];
                    memset(ab.mData, 0, ab.mDataByteSize);
                }
            }
    
            if (!status) {//eventually reaches the delegate we set when initializing the audio module
                if (source.delegate && [source.delegate respondsToSelector:@selector(captureOutput:audioData:)]) {
                    [source.delegate captureOutput:source audioData:[NSData dataWithBytes:buffers.mBuffers[0].mData length:buffers.mBuffers[0].mDataByteSize]];
                }
            }
            return status;
        }
    }
    
    
    
    

    handleInputBuffer pulls the audio data that was captured into self.componetInstance (via AudioUnitRender) and passes it back through the delegate we installed when initializing the audio module.

    Once capture produces data, both modules end up calling the delegates set during initialization, handing the data back for further processing:

    #pragma mark -- CaptureDelegate
    - (void)captureOutput:(nullable LFAudioCapture *)capture audioData:(nullable NSData*)audioData {
        if (self.uploading) [self.audioEncoder encodeAudioData:audioData timeStamp:NOW];
    }
    
    - (void)captureOutput:(nullable LFVideoCapture *)capture pixelBuffer:(nullable CVPixelBufferRef)pixelBuffer {
        if (self.uploading) [self.videoEncoder encodeVideoData:pixelBuffer timeStamp:NOW];
    }
    
    

    self.uploading is a BOOL, NO by default; it becomes YES once the live socket connection is established. After that the audio/video encoders (lazily created) are initialized and encoding starts, and the encoded data is sent over the socket. That part is covered in the streaming section. Note that the code that displays the video in a view is not set up here.

    setPreView is straightforward: it simply hands the display view to the video capture module.

    // session
    - (void)setPreView:(UIView *)preView {
        [self willChangeValueForKey:@"preView"];
        [self.videoCaptureSource setPreView:preView];
        [self didChangeValueForKey:@"preView"];
    }
    //videoCaptureSource
    - (void)setPreView:(UIView *)preView {
        if (self.gpuImageView.superview) [self.gpuImageView removeFromSuperview];
        [preView insertSubview:self.gpuImageView atIndex:0];
        self.gpuImageView.frame = CGRectMake(0, 0, preView.frame.size.width, preView.frame.size.height);
    }
    
    

    When the preview view is set, a self.gpuImageView (@interface GPUImageView : UIView <GPUImageInput>) is inserted into it. This is the view that actually displays the video, and it was wired up in the reloadFilter method during capture setup, as shown below.

    if(self.warterMarkView){
            [self.filter addTarget:self.blendFilter];
            [self.uiElementInput addTarget:self.blendFilter];
            [self.blendFilter addTarget:self.gpuImageView];  //related to displaying the captured video
            if(self.saveLocalVideo) [self.blendFilter addTarget:self.movieWriter];
            [self.filter addTarget:self.output];
            [self.uiElementInput update];
        }else{
            [self.filter addTarget:self.output];
            [self.output addTarget:self.gpuImageView];//related to displaying the captured video
            if(self.saveLocalVideo) [self.output addTarget:self.movieWriter];
        }
    

    _session.delegate: setting the delegate is also simple. Note that this is not the capture delegate but the delegate for the streaming socket connection (how it gets called is described later).
    LFLiveSessionDelegate is used to monitor streaming state and errors:

    @protocol LFLiveSessionDelegate <NSObject>
    
    @optional
    /** live status changed will callback */
    - (void)liveSession:(nullable LFLiveSession *)session liveStateDidChange:(LFLiveState)state;
    /** live debug info callback */
    - (void)liveSession:(nullable LFLiveSession *)session debugInfo:(nullable LFLiveDebug *)debugInfo;
    /** callback socket errorcode */
    - (void)liveSession:(nullable LFLiveSession *)session errorCode:(LFLiveSocketErrorCode)errorCode;
    @end
    
    

    Once this initialization is done, we should already see the front-camera image in the view we provided, with beautification enabled.

    Audio/Video Encoding

    As mentioned above, once audio/video data has been captured, the LFVideoCapture and LFAudioCapture delegates hand it back, and self.uploading decides whether it gets encoded. Now for the encoding flow.
    LFLiveKit defines two encoding protocols, LFAudioEncoding and LFVideoEncoding. The concrete implementations are LFHardwareAudioEncoder and LFHardwareVideoEncoder (LFH264VideoEncoder below iOS 8).
    Since nearly all iOS devices now run iOS 8 or later, we only analyze LFHardwareVideoEncoder here.

    Audio Encoding

    Audio encoding breaks down into these steps:

    1. Initialize the LFHardwareAudioEncoder.
    2. Create the AAC converter.
    3. Segment the audio data.
    4. Encode each segment.
    5. Wrap the encoded data in an LFAudioFrame and hand it to the delegate for streaming (see the sketch after this list).
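
    The frame models referenced in step 5 (and later in the video path) are plain data holders. The following is a sketch of their key properties, inferred from how they are used in the encoder code quoted below; the authoritative declarations are in LFFrame.h, LFAudioFrame.h, and LFVideoFrame.h.

    #import <Foundation/Foundation.h>

    // Sketch of the frame models (property list inferred from usage in this article).
    @interface LFFrame : NSObject
    @property (nonatomic, assign) uint64_t timestamp;   // capture timestamp, later rebased to a relative timestamp
    @property (nonatomic, strong) NSData *data;         // encoded payload (raw AAC data / one H.264 NALU)
    @end

    @interface LFAudioFrame : LFFrame
    @property (nonatomic, strong) NSData *audioInfo;    // the two AudioSpecificConfig bytes (e.g. 0x12 0x10)
    @end

    @interface LFVideoFrame : LFFrame
    @property (nonatomic, assign) BOOL isKeyFrame;
    @property (nonatomic, strong) NSData *sps;
    @property (nonatomic, strong) NSData *pps;
    @end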

    Initializing the LFHardwareAudioEncoder
    The encoder is lazily created if it does not exist yet, and its delegate is set.

    - (id<LFAudioEncoding>)audioEncoder {
        if (!_audioEncoder) {
            _audioEncoder = [[LFHardwareAudioEncoder alloc] initWithAudioStreamConfiguration:_audioConfiguration];
            [_audioEncoder setDelegate:self];
        }
        return _audioEncoder;
    }
    
    - (instancetype)initWithAudioStreamConfiguration:(nullable LFLiveAudioConfiguration *)configuration {
        if (self = [super init]) {
            NSLog(@"USE LFHardwareAudioEncoder");
            _configuration = configuration;
            
            if (!leftBuf) {
                leftBuf = malloc(_configuration.bufferLength);
            }
            
            if (!aacBuf) {
                aacBuf = malloc(_configuration.bufferLength);
            }
            
            
    #ifdef DEBUG
            enabledWriteVideoFile = NO;
            [self initForFilePath];
    #endif
        }
        return self;
    }
    
    

    Once the audio encoder exists, the encode method - (void)encodeAudioData:(nullable NSData*)audioData timeStamp:(uint64_t)timeStamp is called. It first checks whether Apple's AAC converter has been created, and creates it if necessary.

    Creating the AAC Converter

    - (BOOL)createAudioConvert { //create an encoding converter from the input sample description
        if (m_converter != nil) {
            return TRUE;
        }
        // Input audio description
        AudioStreamBasicDescription inputFormat = {0};
        inputFormat.mSampleRate = _configuration.audioSampleRate;// sample rate
        inputFormat.mFormatID = kAudioFormatLinearPCM;// data format
        inputFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked;// format flags
        inputFormat.mChannelsPerFrame = (UInt32)_configuration.numberOfChannels;// channel count
        inputFormat.mFramesPerPacket = 1;// frames per packet: 1 for uncompressed audio; a larger fixed value for compressed formats (e.g. 1024 for AAC)
        inputFormat.mBitsPerChannel = 16;// bits per channel (bits per sample point)
        inputFormat.mBytesPerFrame = inputFormat.mBitsPerChannel / 8 * inputFormat.mChannelsPerFrame;// bytes per frame
        inputFormat.mBytesPerPacket = inputFormat.mBytesPerFrame * inputFormat.mFramesPerPacket;// bytes per packet; 0 when packets have a variable size
        
        // Output audio description
        AudioStreamBasicDescription outputFormat; // the encoded (output) audio format starts here
        memset(&outputFormat, 0, sizeof(outputFormat));// zero it
        outputFormat.mSampleRate = inputFormat.mSampleRate;       // keep the same sample rate
        outputFormat.mFormatID = kAudioFormatMPEG4AAC;            // AAC encoding (kAudioFormatMPEG4AAC or kAudioFormatMPEG4AAC_HE_V2)
        outputFormat.mChannelsPerFrame = (UInt32)_configuration.numberOfChannels;
        outputFormat.mFramesPerPacket = 1024;                     // one AAC packet holds 1024 frames (samples per channel)
        
        const OSType subtype = kAudioFormatMPEG4AAC;
        //Two codec choices: software and hardware
        AudioClassDescription requestedCodecs[2] = {
            {
                kAudioEncoderComponentType,
                subtype,
                kAppleSoftwareAudioCodecManufacturer// software codec
            },
            {
                kAudioEncoderComponentType,
                subtype,
                kAppleHardwareAudioCodecManufacturer// hardware codec
            }
        };
        
        OSStatus result = AudioConverterNewSpecific(&inputFormat, &outputFormat, 2, requestedCodecs, &m_converter);//create the AudioConverter: input description, output description, number of requestedCodecs, the codec descriptions, and the converter out-parameter
        UInt32 outputBitrate = _configuration.audioBitrate;
        UInt32 propSize = sizeof(outputBitrate);
        
        
        if(result == noErr) {
            result = AudioConverterSetProperty(m_converter, kAudioConverterEncodeBitRate, propSize, &outputBitrate);//set the output bitrate
        }
        
        return YES;
    }
    
    

    Segmenting the Audio Data

    - (void)encodeAudioData:(nullable NSData*)audioData timeStamp:(uint64_t)timeStamp {
        if (![self createAudioConvert]) {//make sure the AAC converter exists
            return;
        }
        
        if(leftLength + audioData.length >= self.configuration.bufferLength){
            ///< enough data: encode and send
            NSInteger totalSize = leftLength + audioData.length;
            NSInteger encodeCount = totalSize/self.configuration.bufferLength;
            char *totalBuf = malloc(totalSize);
            char *p = totalBuf;
            
            memset(totalBuf, (int)totalSize, 0);
            memcpy(totalBuf, leftBuf, leftLength);
            memcpy(totalBuf + leftLength, audioData.bytes, audioData.length);
            
            for(NSInteger index = 0;index < encodeCount;index++){
                [self encodeBuffer:p  timeStamp:timeStamp];
                p += self.configuration.bufferLength;
            }
            
            leftLength = totalSize%self.configuration.bufferLength;
            memset(leftBuf, 0, self.configuration.bufferLength);
            memcpy(leftBuf, totalBuf + (totalSize -leftLength), leftLength);
            
            free(totalBuf);
            
        }else{
            ///< not enough yet: accumulate
            memcpy(leftBuf+leftLength, audioData.bytes, audioData.length);
            leftLength = leftLength + audioData.length;
        }
    }
    
    

    First, note that audio is encoded in chunks of exactly configuration.bufferLength bytes. When the callback receives new audio data, it checks whether leftLength + audioData.length reaches bufferLength. leftLength is an instance variable holding how many bytes were left over after the previous bufferLength-sized chunk was sent (0 the first time). If the threshold is not reached, the data is simply accumulated:

    memcpy(leftBuf+leftLength, audioData.bytes, audioData.length);
    leftLength = leftLength + audioData.length;
    

    The data is copied to leftBuf + leftLength, where leftBuf is the instance-level buffer holding the leftovers, and leftLength is advanced accordingly.
    When the next chunk arrives, leftLength + audioData.length is checked against the threshold again. If it is still too small, accumulation continues; once it is large enough, the data can be encoded and sent.
    First the total amount of data for this round is recorded: the previous leftovers plus the new data.

    NSInteger totalSize = leftLength + audioData.length;
    

    There are two cases: if audioData.length is small, totalSize is only 1.x times bufferLength; if audioData.length is large, totalSize can be several times bufferLength, in which case the data still has to go out in several batches.

    NSInteger encodeCount = totalSize/self.configuration.bufferLength;
    

    This line computes how many bufferLength-sized batches there are in this round.
    Then the data is sent: memory for this round is allocated, and a pointer tracks the current send position:

    char *totalBuf = malloc(totalSize);
            char *p = totalBuf;
    

    The buffer is then filled: the previous leftovers are copied into totalBuf, followed by this round's data. After that the batches are encoded in a for loop (covering the case where totalSize is several times bufferLength); [self encodeBuffer:p timeStamp:timeStamp] encodes bufferLength bytes starting at p.

            memset(totalBuf, (int)totalSize, 0);//meant to zero totalBuf; the source has the arguments swapped, it should be memset(totalBuf, 0, (int)totalSize);
            memcpy(totalBuf, leftBuf, leftLength);//copy the previous leftovers
            memcpy(totalBuf + leftLength, audioData.bytes, audioData.length);//copy this round's data
            
            for(NSInteger index = 0;index < encodeCount;index++){
                [self encodeBuffer:p  timeStamp:timeStamp];
                p += self.configuration.bufferLength;
            }
            
    

    After sending, the leftover size leftLength is recomputed, leftBuf is zeroed up to bufferLength, and the tail of totalBuf is copied into leftBuf for the next round.

    leftLength = totalSize%self.configuration.bufferLength;
            memset(leftBuf, 0, self.configuration.bufferLength);
            memcpy(leftBuf, totalBuf + (totalSize -leftLength), leftLength);
            
            free(totalBuf);
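
    A quick numeric walk-through may help. Assume a hypothetical bufferLength of 2048 bytes, 500 leftover bytes from the previous call, and a 3000-byte audioData chunk arriving now; the fragment below just traces the arithmetic used above.

    // Hypothetical trace of the segmentation arithmetic (bufferLength = 2048 assumed purely for illustration).
    NSInteger bufferLength = 2048;
    NSInteger leftLength   = 500;            // left over from the previous call
    NSInteger audioLength  = 3000;           // size of the incoming audioData

    NSInteger totalSize   = leftLength + audioLength;   // 3500 >= 2048, so this round encodes
    NSInteger encodeCount = totalSize / bufferLength;   // 3500 / 2048 = 1 batch of 2048 bytes
    NSInteger remaining   = totalSize % bufferLength;   // 3500 % 2048 = 1452 bytes carried over in leftBuf

    NSLog(@"encode %ld batch(es), carry %ld bytes to the next call", (long)encodeCount, (long)remaining);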
    

    Encoding and Wrapping into LFAudioFrame

    - (void)encodeBuffer:(char*)buf timeStamp:(uint64_t)timeStamp{
        
        AudioBuffer inBuffer;
        inBuffer.mNumberChannels = 1;
        inBuffer.mData = buf;
        inBuffer.mDataByteSize = (UInt32)self.configuration.bufferLength;
        
        // Build the input buffer list
        AudioBufferList buffers;
        buffers.mNumberBuffers = 1;//a single inBuffer
        buffers.mBuffers[0] = inBuffer;
        
        
        // Build the output buffer list
        AudioBufferList outBufferList;
        outBufferList.mNumberBuffers = 1;//a single outBuffer
        outBufferList.mBuffers[0].mNumberChannels = inBuffer.mNumberChannels;
        outBufferList.mBuffers[0].mDataByteSize = inBuffer.mDataByteSize;   // output buffer size
        outBufferList.mBuffers[0].mData = aacBuf;           // AAC buffer: where the encoded data is written
        UInt32 outputDataPacketSize = 1;
        if (AudioConverterFillComplexBuffer(m_converter, inputDataProc, &buffers, &outputDataPacketSize, &outBufferList, NULL) != noErr) {
            return;
        }
        //Wrap into an LFAudioFrame for later streaming
        LFAudioFrame *audioFrame = [LFAudioFrame new];
        audioFrame.timestamp = timeStamp;
        audioFrame.data = [NSData dataWithBytes:aacBuf length:outBufferList.mBuffers[0].mDataByteSize];
        
        char exeData[2];//FLV audio header (AudioSpecificConfig); for 44100 Hz it is 0x12 0x10
        exeData[0] = _configuration.asc[0];
        exeData[1] = _configuration.asc[1];
        audioFrame.audioInfo = [NSData dataWithBytes:exeData length:2];
        if (self.aacDeleage && [self.aacDeleage respondsToSelector:@selector(audioEncoder:audioFrame:)]) {
            [self.aacDeleage audioEncoder:self audioFrame:audioFrame];//call the encoding-finished delegate
        }
        
        if (self->enabledWriteVideoFile) {//write to a local file; only used when debugging
            NSData *adts = [self adtsData:_configuration.numberOfChannels rawDataLength:audioFrame.data.length];
            fwrite(adts.bytes, 1, adts.length, self->fp);
            fwrite(audioFrame.data.bytes, 1, audioFrame.data.length, self->fp);
        }
        
    }
    
    #pragma mark -- AudioCallBack
    OSStatus inputDataProc(AudioConverterRef inConverter, UInt32 *ioNumberDataPackets, AudioBufferList *ioData, AudioStreamPacketDescription * *outDataPacketDescription, void *inUserData) { //during AudioConverterFillComplexBuffer this callback is asked to supply the input data, i.e. the raw PCM
        AudioBufferList bufferList = *(AudioBufferList *)inUserData;
        ioData->mBuffers[0].mNumberChannels = 1;
        ioData->mBuffers[0].mData = bufferList.mBuffers[0].mData;
        ioData->mBuffers[0].mDataByteSize = bufferList.mBuffers[0].mDataByteSize;
        return noErr;
    }
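
    The two audioInfo bytes come from the configuration's AudioSpecificConfig (asc). For AAC-LC the layout is 5 bits of audio object type, 4 bits of sampling-frequency index, and 4 bits of channel configuration, which for 44.1 kHz stereo yields exactly the 0x12 0x10 mentioned in the comment above. A sketch of the computation, assuming AAC-LC (object type 2) and the standard frequency-index table; the helper name makeASC is made up for illustration:

    // Sketch: deriving the two AudioSpecificConfig bytes for AAC-LC (assumed object type 2).
    // sampleRateIndex follows the standard table: 44100 Hz -> 4, 22050 Hz -> 7, 11025 Hz -> 10, ...
    static void makeASC(uint8_t asc[2], int sampleRateIndex, int channels) {
        asc[0] = (uint8_t)((2 << 3) | ((sampleRateIndex >> 1) & 0x07));                     // objectType(5 bits) + high 3 bits of the index
        asc[1] = (uint8_t)(((sampleRateIndex & 0x01) << 7) | ((channels & 0x0F) << 3));     // low index bit + channel configuration
    }
    // makeASC(asc, 4, 2) -> asc[0] = 0x12, asc[1] = 0x10  (44.1 kHz, stereo)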
    
    

    References:
    Audio Converter Services
    1小时学会:最简单的iOS直播推流(七)h264/aac 硬编码 (Learn in 1 hour: the simplest iOS live streaming push, part 7: H.264/AAC hardware encoding)

    Video Encoding

    Video encoding breaks down into these steps:

    1. Initialize the LFHardwareVideoEncoder.
    2. Encode.
    3. Wrap the output.

    Initialization
    The video encoder is likewise lazily created and its delegate set. LFLiveKit provides two implementations: LFH264VideoEncoder below iOS 8 and LFHardwareVideoEncoder on iOS 8 and later; for brevity we only analyze LFHardwareVideoEncoder.

    The video encoder's initializer does more work than the audio encoder's: it creates Apple's H.264 compression session and installs the encode-finished callback. The details:

    - (id<LFVideoEncoding>)videoEncoder {
        if (!_videoEncoder) {
            if([[UIDevice currentDevice].systemVersion floatValue] < 8.0){
                _videoEncoder = [[LFH264VideoEncoder alloc] initWithVideoStreamConfiguration:_videoConfiguration];
            }else{
                _videoEncoder = [[LFHardwareVideoEncoder alloc] initWithVideoStreamConfiguration:_videoConfiguration];
            }
            [_videoEncoder setDelegate:self];
        }
        return _videoEncoder;
    }
    
    
    - (instancetype)initWithVideoStreamConfiguration:(LFLiveVideoConfiguration *)configuration {
        if (self = [super init]) {
            NSLog(@"USE LFHardwareVideoEncoder");
            _configuration = configuration;
            [self resetCompressionSession];//the key initialization step
            //register for foreground/background notifications
            [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(willEnterBackground:) name:UIApplicationWillResignActiveNotification object:nil];
            [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(willEnterForeground:) name:UIApplicationDidBecomeActiveNotification object:nil];
    #ifdef DEBUG
            enabledWriteVideoFile = NO;
            [self initForFilePath];
    #endif
            
        }
        return self;
    }
    
    - (void)resetCompressionSession {//reset the VTCompressionSessionRef
        if (compressionSession) {//if one already exists, tear it down and recreate it
            VTCompressionSessionCompleteFrames(compressionSession, kCMTimeInvalid);//stop encoding
    
            VTCompressionSessionInvalidate(compressionSession);
            CFRelease(compressionSession);
            compressionSession = NULL;
        }
    
        //Create the VTCompressionSessionRef used for H.264 encoding; VideoCompressonOutputCallback fires when a frame has been encoded
        OSStatus status = VTCompressionSessionCreate(NULL, _configuration.videoSize.width, _configuration.videoSize.height, kCMVideoCodecType_H264, NULL, NULL, NULL, VideoCompressonOutputCallback, (__bridge void *)self, &compressionSession);
        if (status != noErr) {
            return;
        }
    
        //Configure the VTCompressionSessionRef
        _currentVideoBitRate = _configuration.videoBitRate;
        // Maximum keyframe interval, i.e. the GOP size
        VTSessionSetProperty(compressionSession, kVTCompressionPropertyKey_MaxKeyFrameInterval, (__bridge CFTypeRef)@(_configuration.videoMaxKeyframeInterval));
        // Maximum keyframe interval expressed in seconds
        VTSessionSetProperty(compressionSession, kVTCompressionPropertyKey_MaxKeyFrameIntervalDuration, (__bridge CFTypeRef)@(_configuration.videoMaxKeyframeInterval/_configuration.videoFrameRate));
        // Expected frame rate; this is only a hint for the session, not the actual FPS
        VTSessionSetProperty(compressionSession, kVTCompressionPropertyKey_ExpectedFrameRate, (__bridge CFTypeRef)@(_configuration.videoFrameRate));
        // Average bitrate; without it the session encodes at a very low bitrate and the video looks blurry
        VTSessionSetProperty(compressionSession, kVTCompressionPropertyKey_AverageBitRate, (__bridge CFTypeRef)@(_configuration.videoBitRate));
        // Data rate limits
        NSArray *limit = @[@(_configuration.videoBitRate * 1.5/8), @(1)];// CFArray[CFNumber], [bytes, seconds, bytes, seconds...]
        VTSessionSetProperty(compressionSession, kVTCompressionPropertyKey_DataRateLimits, (__bridge CFArrayRef)limit);
        // Real-time encoding, to reduce latency
        VTSessionSetProperty(compressionSession, kVTCompressionPropertyKey_RealTime, kCFBooleanTrue);
        // H.264 profile; live streaming usually uses baseline to avoid the latency introduced by B frames
        VTSessionSetProperty(compressionSession, kVTCompressionPropertyKey_ProfileLevel, kVTProfileLevel_H264_Main_AutoLevel);
        // Allow frame reordering
        VTSessionSetProperty(compressionSession, kVTCompressionPropertyKey_AllowFrameReordering, kCFBooleanTrue);
        // Use CABAC entropy coding
        VTSessionSetProperty(compressionSession, kVTCompressionPropertyKey_H264EntropyMode, kVTH264EntropyMode_CABAC);
        // Get ready to encode
        VTCompressionSessionPrepareToEncodeFrames(compressionSession);
    
    }
    
    

    Encoding

    Once the video encoder is ready, the encode method is called. The heart of the process is a single function, VTCompressionSessionEncodeFrame.

    #pragma mark -- LFVideoEncoder
    - (void)encodeVideoData:(CVPixelBufferRef)pixelBuffer timeStamp:(uint64_t)timeStamp {
        if(_isBackGround) return;
        frameCount++;//frame counter
        
        CMTime presentationTimeStamp = CMTimeMake(frameCount, (int32_t)_configuration.videoFrameRate);//presentation timestamp derived from the frame count and frame rate
        VTEncodeInfoFlags flags;
        CMTime duration = CMTimeMake(1, (int32_t)_configuration.videoFrameRate);// duration of this frame
    
        NSDictionary *properties = nil;
        if (frameCount % (int32_t)_configuration.videoMaxKeyframeInterval == 0) {
            //force a keyframe every videoMaxKeyframeInterval frames
            properties = @{(__bridge NSString *)kVTEncodeFrameOptionKey_ForceKeyFrame: @YES};
        }
        NSNumber *timeNumber = @(timeStamp);
    
        //Start encoding
        //presentationTimeStamp: the frame's presentation timestamp
        //duration: the frame's duration
        //properties: extra per-frame encoding properties
        //timeNumber: the value you associate with this frame, here CACurrentMediaTime()*1000; it is passed through to the callback as VTFrameRef
        //flags: info about the encode operation (NULL = no info, kVTEncodeInfo_Asynchronous, kVTEncodeInfo_FrameDropped)
        OSStatus status = VTCompressionSessionEncodeFrame(compressionSession, pixelBuffer, presentationTimeStamp, duration, (__bridge CFDictionaryRef)properties, (__bridge_retained void *)timeNumber, &flags);
        if(status != noErr){//on failure, reset the compressionSession
            [self resetCompressionSession];
        }
    }
    
    

    Wrapping
    When encoding finishes, the VideoCompressonOutputCallback installed during initialization is invoked.

    #pragma mark -- VideoCallBack
    static void VideoCompressonOutputCallback(void *VTref, void *VTFrameRef, OSStatus status, VTEncodeInfoFlags infoFlags, CMSampleBufferRef sampleBuffer){
        if (!sampleBuffer) return;
        CFArrayRef array = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, true);
        if (!array) return;
        CFDictionaryRef dic = (CFDictionaryRef)CFArrayGetValueAtIndex(array, 0);
        if (!dic) return;
    
        // Is this frame a keyframe?
        BOOL keyframe = !CFDictionaryContainsKey(dic, kCMSampleAttachmentKey_NotSync);
        uint64_t timeStamp = [((__bridge_transfer NSNumber *)VTFrameRef) longLongValue];
    
        LFHardwareVideoEncoder *videoEncoder = (__bridge LFHardwareVideoEncoder *)VTref;
        if (status != noErr) {
            return;
        }
    
        if (keyframe && !videoEncoder->sps) {//a keyframe, and the SPS (sequence parameter set) has not been captured yet
            CMFormatDescriptionRef format = CMSampleBufferGetFormatDescription(sampleBuffer);
    
            size_t sparameterSetSize, sparameterSetCount;
            //fetch the SPS
            const uint8_t *sparameterSet;
            OSStatus statusCode = CMVideoFormatDescriptionGetH264ParameterSetAtIndex(format, 0, &sparameterSet, &sparameterSetSize, &sparameterSetCount, 0);
            if (statusCode == noErr) {
                size_t pparameterSetSize, pparameterSetCount;
                //fetch the PPS
                const uint8_t *pparameterSet;
                OSStatus statusCode = CMVideoFormatDescriptionGetH264ParameterSetAtIndex(format, 1, &pparameterSet, &pparameterSetSize, &pparameterSetCount, 0);
                if (statusCode == noErr) {
                    videoEncoder->sps = [NSData dataWithBytes:sparameterSet length:sparameterSetSize];//store the SPS
                    videoEncoder->pps = [NSData dataWithBytes:pparameterSet length:pparameterSetSize];//store the PPS
                    //The SPS/PPS can be treated like an ordinary H.264 NALU and placed at the very beginning of the stream.
                    //When saving to a file, prepend the 4-byte start code [0 0 0 1] and write it at the head of the .h264 file.
                    //When pushing the stream, the SPS/PPS goes into the FLV data section.
    
                    if (videoEncoder->enabledWriteVideoFile) {//debug
                        NSMutableData *data = [[NSMutableData alloc] init];
                        uint8_t header[] = {0x00, 0x00, 0x00, 0x01};
                        [data appendBytes:header length:4];
                        [data appendData:videoEncoder->sps];
                        [data appendBytes:header length:4];
                        [data appendData:videoEncoder->pps];
                        fwrite(data.bytes, 1, data.length, videoEncoder->fp);
                    }
    
                }
            }
        }
    
    
        //Fetch the encoded video data
        CMBlockBufferRef dataBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
        size_t length, totalLength;
        char *dataPointer;
        //Get the data pointer, the length at that offset, and the total length
        OSStatus statusCodeRet = CMBlockBufferGetDataPointer(dataBuffer, 0, &length, &totalLength, &dataPointer);
        if (statusCodeRet == noErr) {
            size_t bufferOffset = 0;
            static const int AVCCHeaderLength = 4;
            // Loop over the NALUs in the buffer
            while (bufferOffset < totalLength - AVCCHeaderLength) {
                // Read the NAL unit length
                uint32_t NALUnitLength = 0;
                memcpy(&NALUnitLength, dataPointer + bufferOffset, AVCCHeaderLength);
                
    
                //Convert from big-endian to host byte order; for background on endianness see http://blog.csdn.net/sunjie886/article/details/54944810
                NALUnitLength = CFSwapInt32BigToHost(NALUnitLength);
    
                //Wrap the data in an LFVideoFrame for later streaming
                LFVideoFrame *videoFrame = [LFVideoFrame new];
                videoFrame.timestamp = timeStamp;
                videoFrame.data = [[NSData alloc] initWithBytes:(dataPointer + bufferOffset + AVCCHeaderLength) length:NALUnitLength];
                videoFrame.isKeyFrame = keyframe;
                videoFrame.sps = videoEncoder->sps;
                videoFrame.pps = videoEncoder->pps;
    
                //Call the encoding-finished delegate
                if (videoEncoder.h264Delegate && [videoEncoder.h264Delegate respondsToSelector:@selector(videoEncoder:videoFrame:)]) {
                    [videoEncoder.h264Delegate videoEncoder:videoEncoder videoFrame:videoFrame];
                }
    
                if (videoEncoder->enabledWriteVideoFile) {//debug
                    NSMutableData *data = [[NSMutableData alloc] init];
                    if (keyframe) {
                        uint8_t header[] = {0x00, 0x00, 0x00, 0x01};
                        [data appendBytes:header length:4];
                    } else {
                        uint8_t header[] = {0x00, 0x00, 0x01};
                        [data appendBytes:header length:3];
                    }
                    [data appendData:videoFrame.data];
    
                    fwrite(data.bytes, 1, data.length, videoEncoder->fp);
                }
    
    
                bufferOffset += AVCCHeaderLength + NALUnitLength;
    
            }
    
        }
    }
    
    
    

    References:
    VideoToolbox
    iOS8系统H264视频硬件编解码说明 (Notes on H.264 hardware encoding/decoding on iOS 8)

    Streaming

    Streaming consists of two steps:

    1. Establish the socket connection.
    2. Send the audio/video data.

    Establishing the Socket Connection

    After encoding, the encoder delegates hand us every encoded H.264/AAC frame, but before anything can be sent a channel has to be established. That happens when startLive is called on the session.

    - (void)startLive {
        LFLiveStreamInfo *streamInfo = [LFLiveStreamInfo new];
        streamInfo.url = @"rtmp://10.10.1.71:1935/rtmplive/123";
    //    streamInfo.url = @"rtmp://202.69.69.180:443/live/123";
    //    http://qqpull99.inke.cn/live/1506399510572961.flv?ikHost=tx&ikOp=0&codecInfo=8192
        
        [self.session startLive:streamInfo];
    }
    
    
    #pragma mark -- CustomMethod
    - (void)startLive:(LFLiveStreamInfo *)streamInfo {
        if (!streamInfo) return;
        _streamInfo = streamInfo;
        _streamInfo.videoConfiguration = _videoConfiguration;
        _streamInfo.audioConfiguration = _audioConfiguration;
        [self.socket start];
    }
    
    

    self.socket here is lazily loaded; the session itself was initialized earlier, when the audio/video parameters were set up. When startLive is called, the LFLiveStreamInfo passed in is stored in the session, and the session's own audio/video configurations (assigned during initialization) are copied into the stream info. After that, the socket is created and started.

    The socket is declared as a protocol (LFStreamSocket); the concrete implementation is LFStreamRTMPSocket. That is the nice thing about using a protocol: if an implementation based on another transport appears later, very little code needs to change to integrate it.
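
    Judging from how the socket is used by LFLiveSession (start, sendFrame:, setDelegate:, and the initializer below), the LFStreamSocket protocol looks roughly like this; this is a sketch, the real declaration is in LFStreamSocket.h.

    // Sketch of the streaming-socket abstraction, inferred from its call sites in this article.
    @protocol LFStreamSocket <NSObject>
    - (void)start;                                   // open the RTMP connection
    - (void)stop;                                    // tear it down
    - (void)sendFrame:(nullable LFFrame *)frame;     // enqueue an encoded frame for sending
    - (void)setDelegate:(nullable id<LFStreamSocketDelegate>)delegate;
    @optional
    - (nullable instancetype)initWithStream:(nullable LFLiveStreamInfo *)stream
                          reconnectInterval:(NSInteger)reconnectInterval
                             reconnectCount:(NSInteger)reconnectCount;
    @end

    The lazy getter and the designated initializer of LFStreamRTMPSocket are shown next.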

    - (id<LFStreamSocket>)socket {
        if (!_socket) {
            _socket = [[LFStreamRTMPSocket alloc] initWithStream:self.streamInfo reconnectInterval:self.reconnectInterval reconnectCount:self.reconnectCount];
            [_socket setDelegate:self];
        }
        return _socket;
    }
    
    
    - (nullable instancetype)initWithStream:(nullable LFLiveStreamInfo *)stream reconnectInterval:(NSInteger)reconnectInterval reconnectCount:(NSInteger)reconnectCount{
        if (!stream) @throw [NSException exceptionWithName:@"LFStreamRtmpSocket init error" reason:@"stream is nil" userInfo:nil];
        if (self = [super init]) {
            _stream = stream;
            if (reconnectInterval > 0) _reconnectInterval = reconnectInterval;
            else _reconnectInterval = RetryTimesMargin;
            
            if (reconnectCount > 0) _reconnectCount = reconnectCount;
            else _reconnectCount = RetryTimesBreaken;
            
            [self addObserver:self forKeyPath:@"isSending" options:NSKeyValueObservingOptionNew context:nil];//KVO is used here mainly so that sending can resume after a send error
        }
        return self;
    }
    
    

    Note that during session initialization we never assigned self.reconnectInterval (reconnect interval) or self.reconnectCount (reconnect attempts), so both are 0 here and the defaults are used:

    static const NSInteger RetryTimesBreaken = 5;  ///< reconnect for 1 minute: every 3 seconds, 20 times in total (per the original comment)
    static const NSInteger RetryTimesMargin = 3;
    

    After LFStreamRTMPSocket is initialized, its LFStreamSocketDelegate is set so the session can observe socket state changes. State changes, errors, and debug info are forwarded to the session's LFLiveSessionDelegate; buffer-state changes are used to adjust the bitrate automatically.
    The delegate methods are implemented as follows:

    /** callback socket current status (reports the current connection state) */
    - (void)socketStatus:(nullable id<LFStreamSocket>)socket status:(LFLiveState)status {
        if (status == LFLiveStart) {//connected
            //if uploading has not started yet, reset the flags below and mark uploading as started
            if (!self.uploading) {
                self.AVAlignment = NO;//are audio and video aligned yet
                self.hasCaptureAudio = NO;//have we captured audio yet
                self.hasKeyFrameVideo = NO;//have we captured a video keyframe yet
                self.relativeTimestamps = 0;//relative upload timestamp base
                self.uploading = YES;//uploading has started
            }
        } else if(status == LFLiveStop || status == LFLiveError){
            //if the connection stopped or failed, stop uploading
            self.uploading = NO;
        }
        dispatch_async(dispatch_get_main_queue(), ^{
            self.state = status;
            if (self.delegate && [self.delegate respondsToSelector:@selector(liveSession:liveStateDidChange:)]) {
                //forward to the session's LFLiveSessionDelegate
                [self.delegate liveSession:self liveStateDidChange:status];
            }
        });
    }
    
    /** socket connection error callback */
    - (void)socketDidError:(nullable id<LFStreamSocket>)socket errorCode:(LFLiveSocketErrorCode)errorCode {
        dispatch_async(dispatch_get_main_queue(), ^{
            if (self.delegate && [self.delegate respondsToSelector:@selector(liveSession:errorCode:)]) {
                //forward to the session's LFLiveSessionDelegate
                [self.delegate liveSession:self errorCode:errorCode];
            }
        });
    }
    
    /** debug info callback */
    - (void)socketDebug:(nullable id<LFStreamSocket>)socket debugInfo:(nullable LFLiveDebug *)debugInfo {
        self.debugInfo = debugInfo;
        if (self.showDebugInfo) {
            dispatch_async(dispatch_get_main_queue(), ^{
                if (self.delegate && [self.delegate respondsToSelector:@selector(liveSession:debugInfo:)]) {
                    //forward to the session's LFLiveSessionDelegate
                    [self.delegate liveSession:self debugInfo:debugInfo];
                }
            });
        }
    }
    
    /** callback buffer current status (reports the buffer state, which can drive frame-rate/bitrate switching strategies) */
    - (void)socketBufferStatus:(nullable id<LFStreamSocket>)socket status:(LFLiveBuffferState)status {
        if((self.captureType & LFLiveCaptureMaskVideo || self.captureType & LFLiveInputMaskVideo) && self.adaptiveBitrate){
            //current encoder bitrate
            NSUInteger videoBitRate = [self.videoEncoder videoBitRate];
            if (status == LFLiveBuffferDecline) {//the buffer is shrinking: the network is keeping up
                //current bitrate < configured maximum bitrate
                if (videoBitRate < _videoConfiguration.videoMaxBitRate) {
                    //raise the bitrate
                    videoBitRate = videoBitRate + 50 * 1000;
                    [self.videoEncoder setVideoBitRate:videoBitRate];
                    NSLog(@"Increase bitrate %@", @(videoBitRate));
                }
            } else {//the buffer is growing or the state is unknown: the network is struggling
                //current bitrate > configured minimum bitrate
                if (videoBitRate > self.videoConfiguration.videoMinBitRate) {
                    //lower the bitrate
                    videoBitRate = videoBitRate - 100 * 1000;
                    [self.videoEncoder setVideoBitRate:videoBitRate];
                    NSLog(@"Decline bitrate %@", @(videoBitRate));
                }
            }
        }
    }
    
    

    That completes the socket setup; next comes the RTMP connection, which happens in the socket's start method.

    - (void)_start {
        if (!_stream) return;
        if (_isConnecting) return;
        if (_rtmp != NULL) return;
        self.debugInfo.streamId = self.stream.streamId;
        self.debugInfo.uploadUrl = self.stream.url;
        self.debugInfo.isRtmp = YES;
        if (_isConnecting) return;
        
        _isConnecting = YES;
        if (self.delegate && [self.delegate respondsToSelector:@selector(socketStatus:status:)]) {
            [self.delegate socketStatus:self status:LFLivePending];
        }
        
        if (_rtmp != NULL) {
            PILI_RTMP_Close(_rtmp, &_error);
            PILI_RTMP_Free(_rtmp);
        }
        [self RTMP264_Connect:(char *)[_stream.url cStringUsingEncoding:NSASCIIStringEncoding]];
    }
    

    After a few sanity checks, the delegate is notified with the LFLivePending state. The actual connection is made in [self RTMP264_Connect:(char *)[_stream.url cStringUsingEncoding:NSASCIIStringEncoding]], as follows:

    - (NSInteger)RTMP264_Connect:(char *)push_url {
       //The camera timestamps grow monotonically, so relative timestamps are needed later
       //Allocate and initialize
       _rtmp = PILI_RTMP_Alloc();
       PILI_RTMP_Init(_rtmp);
    
       //Set the URL
       if (PILI_RTMP_SetupURL(_rtmp, push_url, &_error) == FALSE) {
           //log(LOG_ERR, "RTMP_SetupURL() failed!");
           goto Failed;
       }
    
       _rtmp->m_errorCallback = RTMPErrorCallback;
       _rtmp->m_connCallback = ConnectionTimeCallback;
       _rtmp->m_userData = (__bridge void *)self;
       _rtmp->m_msgCounter = 1;
       _rtmp->Link.timeout = RTMP_RECEIVE_TIMEOUT;
       
       //Enable writing, i.e. publishing the stream; this must be called before connecting or it has no effect
       PILI_RTMP_EnableWrite(_rtmp);
    
       //Connect to the server
       if (PILI_RTMP_Connect(_rtmp, NULL, &_error) == FALSE) {
           goto Failed;
       }
    
       //Connect to the stream
       if (PILI_RTMP_ConnectStream(_rtmp, 0, &_error) == FALSE) {
           goto Failed;
       }
    
       if (self.delegate && [self.delegate respondsToSelector:@selector(socketStatus:status:)]) {
           [self.delegate socketStatus:self status:LFLiveStart];
       }
    
       [self sendMetaData];
    
       _isConnected = YES;
       _isConnecting = NO;
       _isReconnecting = NO;
       _isSending = NO;
       return 0;
    
    Failed:
       PILI_RTMP_Close(_rtmp, &_error);
       PILI_RTMP_Free(_rtmp);
       _rtmp = NULL;
       [self reconnect];
       return -1;
    }
    

    This function calls into the low-level routines in rtmp.c to establish the connection. Once connected, the socket delegate is notified with LFLiveStart, telling the session that the live stream has started. It also calls [self sendMetaData], which packages the configured audio/video parameters into a PILI_RTMPPacket and sends them.

    Sending the Audio/Video Data

    Audio/video data is sent as soon as it has been encoded.

    - (void)audioEncoder:(nullable id<LFAudioEncoding>)encoder audioFrame:(nullable LFAudioFrame *)frame {
        //< upload, with timestamp alignment
        if (self.uploading){
            self.hasCaptureAudio = YES;
            if(self.AVAlignment) [self pushSendBuffer:frame];
        }
    }
    
    - (void)videoEncoder:(nullable id<LFVideoEncoding>)encoder videoFrame:(nullable LFVideoFrame *)frame {
        //< upload, with timestamp alignment
        if (self.uploading){
            if(frame.isKeyFrame && self.hasCaptureAudio) self.hasKeyFrameVideo = YES;
            if(self.AVAlignment) [self pushSendBuffer:frame];
        }
    }
    
    

    Once the socket connection is up, the socket delegate reports its state to the session. When the session sees LFLiveStart it sets self.uploading to YES, so every new encoded frame is pushed into the buffer via [self pushSendBuffer:frame] and waits to be sent.

    - (void)pushSendBuffer:(LFFrame*)frame{
        if(self.relativeTimestamps == 0){
            self.relativeTimestamps = frame.timestamp;
        }
        frame.timestamp = [self uploadTimestamp:frame.timestamp];
        [self.socket sendFrame:frame];
    }
    
    - (void)sendFrame:(LFFrame *)frame {
        if (!frame) return;
        [self.buffer appendObject:frame];
        
        if(!self.isSending){
            [self sendFrame];
        }
    }
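
    The uploadTimestamp: helper used in pushSendBuffer: is not quoted in this article. A plausible sketch, consistent with how relativeTimestamps is assigned above, is simply subtracting the first frame's timestamp so the stream starts at 0 (the real implementation lives in LFLiveSession.m):

    // Plausible sketch of the relative-timestamp helper.
    - (uint64_t)uploadTimestamp:(uint64_t)captureTimestamp {
        // relativeTimestamps was set to the very first frame's timestamp in pushSendBuffer:,
        // so every frame is rebased to "milliseconds since the stream started".
        return captureTimestamp - self.relativeTimestamps;
    }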
    

    The frame is appended to the buffer (the buffering strategy is covered later), and sending starts when self.isSending is NO. The real work happens in [self sendFrame].

    - (void)sendFrame {
        __weak typeof(self) _self = self;
         dispatch_async(self.rtmpSendQueue, ^{
            if (!_self.isSending && _self.buffer.list.count > 0) {
                _self.isSending = YES;
    
                if (!_self.isConnected || _self.isReconnecting || _self.isConnecting || !_rtmp){
                    _self.isSending = NO;
                    return;
                }
    
                // pop a frame and call the send routines
                LFFrame *frame = [_self.buffer popFirstObject];
                if ([frame isKindOfClass:[LFVideoFrame class]]) {
                    if (!_self.sendVideoHead) {
                        _self.sendVideoHead = YES;
                        if(!((LFVideoFrame*)frame).sps || !((LFVideoFrame*)frame).pps){
                            _self.isSending = NO;
                            return;
                        }
                        [_self sendVideoHeader:(LFVideoFrame *)frame];
                    } else {
                        [_self sendVideo:(LFVideoFrame *)frame];
                    }
                } else {
                    if (!_self.sendAudioHead) {
                        _self.sendAudioHead = YES;
                        if(!((LFAudioFrame*)frame).audioInfo){
                            _self.isSending = NO;
                            return;
                        }
                        [_self sendAudioHeader:(LFAudioFrame *)frame];
                    } else {
                        [_self sendAudio:frame];
                    }
                }
    
                //update the debug info
                _self.debugInfo.totalFrame++;
                _self.debugInfo.dropFrame += _self.buffer.lastDropFrames;
                _self.buffer.lastDropFrames = 0;
    
                _self.debugInfo.dataFlow += frame.data.length;
                _self.debugInfo.elapsedMilli = CACurrentMediaTime() * 1000 - _self.debugInfo.timeStamp;
                if (_self.debugInfo.elapsedMilli < 1000) {
                    _self.debugInfo.bandwidth += frame.data.length;
                    if ([frame isKindOfClass:[LFAudioFrame class]]) {
                        _self.debugInfo.capturedAudioCount++;
                    } else {
                        _self.debugInfo.capturedVideoCount++;
                    }
    
                    _self.debugInfo.unSendCount = _self.buffer.list.count;
                } else {
                    _self.debugInfo.currentBandwidth = _self.debugInfo.bandwidth;
                    _self.debugInfo.currentCapturedAudioCount = _self.debugInfo.capturedAudioCount;
                    _self.debugInfo.currentCapturedVideoCount = _self.debugInfo.capturedVideoCount;
                    if (_self.delegate && [_self.delegate respondsToSelector:@selector(socketDebug:debugInfo:)]) {
                        [_self.delegate socketDebug:_self debugInfo:_self.debugInfo];
                    }
                    _self.debugInfo.bandwidth = 0;
                    _self.debugInfo.capturedAudioCount = 0;
                    _self.debugInfo.capturedVideoCount = 0;
                    _self.debugInfo.timeStamp = CACurrentMediaTime() * 1000;
                }
                
                //reset the sending flag
                dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
                    //< done asynchronously only to avoid calling sendFrame recursively: the current call unwinds before the next one starts
                    _self.isSending = NO;
                });
                
            }
        });
    }
    
    

    This method is gated by the self.isSending flag. It first takes a frame from the buffer:

    LFFrame *frame = [_self.buffer popFirstObject];
    

    Then the frame type is checked. For video, if the video header has not been sent yet it is sent first; otherwise the video data itself is sent. Audio works the same way.
    Note that although every frame carries its header information (SPS/PPS, audioInfo), the header is only sent once.
    Sending goes through these four methods:

    [_self sendVideoHeader:(LFVideoFrame *)frame];
    
    [_self sendVideo:(LFVideoFrame *)frame];
    
    [_self sendAudioHeader:(LFAudioFrame *)frame];
    
      [_self sendAudio:frame];
    

    All four eventually call the following method, which actually sends the data.

    - (NSInteger)sendPacket:(unsigned int)nPacketType data:(unsigned char *)data size:(NSInteger)size nTimestamp:(uint64_t)nTimestamp {
        NSInteger rtmpLength = size;
        PILI_RTMPPacket rtmp_pack;
        PILI_RTMPPacket_Reset(&rtmp_pack);
        PILI_RTMPPacket_Alloc(&rtmp_pack, (uint32_t)rtmpLength);
    
        rtmp_pack.m_nBodySize = (uint32_t)size;
        memcpy(rtmp_pack.m_body, data, size);
        rtmp_pack.m_hasAbsTimestamp = 0;
        rtmp_pack.m_packetType = nPacketType;
        if (_rtmp) rtmp_pack.m_nInfoField2 = _rtmp->m_stream_id;
        rtmp_pack.m_nChannel = 0x04;
        rtmp_pack.m_headerType = RTMP_PACKET_SIZE_LARGE;
        if (RTMP_PACKET_TYPE_AUDIO == nPacketType && size != 4) {
            rtmp_pack.m_headerType = RTMP_PACKET_SIZE_MEDIUM;
        }
        rtmp_pack.m_nTimeStamp = (uint32_t)nTimestamp;
    
        NSInteger nRet = [self RtmpPacketSend:&rtmp_pack];
    
        PILI_RTMPPacket_Free(&rtmp_pack);
        return nRet;
    }
    
    

    As you can see, this method relies heavily on the routines in rtmp.c to perform the final send.

    - (NSInteger)RtmpPacketSend:(PILI_RTMPPacket *)packet {
        if (_rtmp && PILI_RTMP_IsConnected(_rtmp)) {
            int success = PILI_RTMP_SendPacket(_rtmp, packet, 0, &_error);
            return success;
        }
        return -1;
    }
    

    Note: both connecting and sending are dispatched onto a background queue and run off the main thread.

    Buffering Strategy

    As mentioned above, once the socket connection is established, newly encoded audio/video data is first placed in a buffer, and frames are then taken from the buffer and sent. LFLiveKit's buffer is not just a FIFO queue; a number of optimizations happen as frames are added.

    The buffer does two main things:

    • It continuously monitors its own state and reports it back to the session.
    • It holds the audio/video frames until they are sent.
    - (void)appendObject:(LFFrame *)frame {
        if (!frame) return;
        if (!_startTimer) {
            _startTimer = YES;
            [self tick];
        }
    
        dispatch_semaphore_wait(_lock, DISPATCH_TIME_FOREVER);
        if (self.sortList.count < defaultSortBufferMaxCount) {
            [self.sortList addObject:frame];
        } else {
            ///< sort by timestamp
            [self.sortList addObject:frame];
            [self.sortList sortUsingFunction:frameDataCompare context:nil];
            /// drop frames if necessary
            [self removeExpireFrame];
            /// move one frame into the send buffer
            LFFrame *firstFrame = [self.sortList lfPopFirstObject];
    
            if (firstFrame) [self.list addObject:firstFrame];
        }
        dispatch_semaphore_signal(_lock);
    }
    

    The first time a frame is added, [self tick] is called to start monitoring the buffer state,
    as follows:

    - (void)tick {
        /** sample over several intervals; only report if the network is consistently good or consistently bad */
        _currentInterval += self.updateInterval;
    
        dispatch_semaphore_wait(_lock, DISPATCH_TIME_FOREVER);
        [self.thresholdList addObject:@(self.list.count)];
        dispatch_semaphore_signal(_lock);
        
        if (self.currentInterval >= self.callBackInterval) {
            LFLiveBuffferState state = [self currentBufferState];
            if (state == LFLiveBuffferIncrease) {
                if (self.delegate && [self.delegate respondsToSelector:@selector(streamingBuffer:bufferState:)]) {
                    [self.delegate streamingBuffer:self bufferState:LFLiveBuffferIncrease];
                }
            } else if (state == LFLiveBuffferDecline) {
                if (self.delegate && [self.delegate respondsToSelector:@selector(streamingBuffer:bufferState:)]) {
                    [self.delegate streamingBuffer:self bufferState:LFLiveBuffferDecline];
                }
            }
    
            self.currentInterval = 0;
            [self.thresholdList removeAllObjects];
        }
        __weak typeof(self) _self = self;
        dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(self.updateInterval * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
            __strong typeof(_self) self = _self;
            [self tick];
        });
    }
    

    This function monitors the network/sending situation. It is effectively a recursive timer: it reschedules itself every self.updateInterval seconds (1 s by default) and each time appends the current frame count of self.list to self.thresholdList. Every self.callBackInterval seconds (5 s by default) it evaluates the buffer state via [self currentBufferState] and reports it to self.delegate. That callback eventually reaches the session, so the session's delegate can observe the buffer state in real time and adjust the audio/video parameters to optimize quality.

    How does [self currentBufferState] determine the buffer state?
    It looks at how the frame counts stored in self.thresholdList evolve.
    Two counters, increaseCount and decreaseCount, are declared. Walking through self.thresholdList, if an entry is larger than the previous one increaseCount is incremented, otherwise decreaseCount is. If increaseCount reaches self.callBackInterval (which equals the size of self.thresholdList), the buffer is considered to be growing; if decreaseCount does, it is shrinking; otherwise the state is unknown.

    - (LFLiveBuffferState)currentBufferState {
        NSInteger currentCount = 0;
        NSInteger increaseCount = 0;
        NSInteger decreaseCount = 0;
    
        for (NSNumber *number in self.thresholdList) {
            if (number.integerValue > currentCount) {
                increaseCount++;
            } else{
                decreaseCount++;
            }
            currentCount = [number integerValue];
        }
    
        if (increaseCount >= self.callBackInterval) {
            return LFLiveBuffferIncrease;
        }
    
        if (decreaseCount >= self.callBackInterval) {
            return LFLiveBuffferDecline;
        }
        
        return LFLiveBuffferUnknown;
    }
    

    Adding a frame to the buffer is not simply appending it to self.list. The frame first goes into sortList; once sortList holds at least defaultSortBufferMaxCount frames (10 by default), it is sorted by timestamp. After sorting, expired frames are dropped, and then the first frame of sortList is popped and appended to list, where it waits to be sent. (Note that once this threshold is reached, subsequent frames go straight into the else branch, so self.sortList stays above defaultSortBufferMaxCount; it is not the case that a sort only happens once every defaultSortBufferMaxCount frames.)
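
    The comparator passed to sortUsingFunction: orders frames by timestamp. A sketch consistent with that behaviour (the original lives in LFStreamingBuffer.m):

    // Sketch of the timestamp comparator used when sorting sortList.
    NSInteger frameDataCompare(id obj1, id obj2, void *context) {
        LFFrame *frame1 = (LFFrame *)obj1;
        LFFrame *frame2 = (LFFrame *)obj2;
        if (frame1.timestamp == frame2.timestamp) return NSOrderedSame;
        if (frame1.timestamp > frame2.timestamp) return NSOrderedDescending;
        return NSOrderedAscending;
    }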

    The frame-dropping logic:

    - (void)removeExpireFrame {
        if (self.list.count < self.maxCount) return;
    
        NSArray *pFrames = [self expirePFrames];///< the P frames between the first P frame and the first I frame
        self.lastDropFrames += [pFrames count];
        if (pFrames && pFrames.count > 0) {
            [self.list removeObjectsInArray:pFrames];
            return;
        }
        
        NSArray *iFrames = [self expireIFrames];///< drop one I frame (one I frame may correspond to several NALUs)
        self.lastDropFrames += [iFrames count];
        if (iFrames && iFrames.count > 0) {
            [self.list removeObjectsInArray:iFrames];
            return;
        }
        
        [self.list removeAllObjects];
    }
    

    Not every call to this function drops frames; dropping only happens when the buffer holds more than maxCount frames (600 by default; the .h file says 1000 while the .m file uses 600, so there is a small discrepancy). Because I frames are keyframes and normally should not be deleted, P frames are checked and removed first; only if there are no P frames to drop are I frames considered.
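
    expirePFrames and expireIFrames are not quoted in this article. Based on the inline comments above ("the P frames between the first P frame and the first I frame"), a sketch of what the P-frame pass plausibly does might look like this; it is illustrative only, the real code is in LFStreamingBuffer.m.

    // Sketch of the P-frame expiry pass: collect P frames up to the next keyframe so they can be dropped together.
    - (NSArray *)expirePFrames {
        NSMutableArray *pFrames = [NSMutableArray array];
        for (LFFrame *frame in self.list) {
            if (![frame isKindOfClass:[LFVideoFrame class]]) continue;
            LFVideoFrame *videoFrame = (LFVideoFrame *)frame;
            if (videoFrame.isKeyFrame && pFrames.count > 0) break;   // stop at the first I frame after some P frames
            if (!videoFrame.isKeyFrame) [pFrames addObject:videoFrame];
        }
        return pFrames;
    }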
