
iOS Live Streaming Push Implementation: Pushing the Stream

Author: Oceanj | Published 2021-04-26 20:07

These are my notes on the live-streaming push technology I have been studying recently.
The main iOS push pipeline is:

1. Video and audio capture
2. Beauty filters and stickers for video
3. Video and audio encoding
4. Pushing the stream to the media server

This is the last post in the series, covering the push itself.

Pushing the Stream

The previous post covered audio and video encoding. Once encoded, the data is wrapped in LFAudioFrame (audio) and LFVideoFrame (video) objects; the next step is to push these frames.
Our push tool is librtmp, which sends the stream over the RTMP protocol.
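
For reference, here is a sketch of the frame model as it is used in the code below. This is an assumption reconstructed from the fields the post accesses; LFLiveKit's real classes carry more properties:

    @interface LFFrame : NSObject
    @property (nonatomic, assign) uint64_t timestamp;    // ms, relative to stream start
    @property (nonatomic, strong) NSData *data;          // encoded payload
    @end

    @interface LFVideoFrame : LFFrame
    @property (nonatomic, assign) BOOL isKeyFrame;
    @property (nonatomic, strong) NSData *sps;           // H.264 sequence parameter set
    @property (nonatomic, strong) NSData *pps;           // H.264 picture parameter set
    @end

    @interface LFAudioFrame : LFFrame
    @property (nonatomic, strong) NSData *audioInfo;     // AAC AudioSpecificConfig
    @end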

1. First, establish a connection to the streaming server. This is done once, when the whole pusher is initialized; a sketch follows this list.
2. Once connected, push the metadata, i.e. the audio and video parameters, so that the server can parse the stream.
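
The connection code for step 1 is not shown in the original post. Below is a minimal sketch, assuming the pili-librtmp API (the PILI_-prefixed fork of librtmp that LFLiveKit uses) and the _rtmp and _error instance variables that also appear in sendMetaData below; the URL is a hypothetical placeholder:

    char url[1024] = "rtmp://example.com/live/stream";   // hypothetical push URL

    _rtmp = PILI_RTMP_Alloc();
    PILI_RTMP_Init(_rtmp);

    // Parse the push URL into the session.
    if (PILI_RTMP_SetupURL(_rtmp, url, &_error) == FALSE) { /* handle error */ }

    // We are publishing, not playing.
    PILI_RTMP_EnableWrite(_rtmp);

    // TCP connect + RTMP handshake, then open the stream for publishing.
    if (PILI_RTMP_Connect(_rtmp, NULL, &_error) == FALSE) { /* handle error */ }
    if (PILI_RTMP_ConnectStream(_rtmp, 0, &_error) == FALSE) { /* handle error */ }

Once the stream is open, the metadata from step 2 is sent as an AMF onMetaData packet: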
    - (void)sendMetaData {
        PILI_RTMPPacket packet;
    
        char pbuf[2048], *pend = pbuf + sizeof(pbuf);
    
        packet.m_nChannel = 0x03;                   // control channel (invoke)
        packet.m_headerType = RTMP_PACKET_SIZE_LARGE;
        packet.m_packetType = RTMP_PACKET_TYPE_INFO;
        packet.m_nTimeStamp = 0;
        packet.m_nInfoField2 = _rtmp->m_stream_id;
        packet.m_hasAbsTimestamp = TRUE;
        packet.m_body = pbuf + RTMP_MAX_HEADER_SIZE;
    
        char *enc = packet.m_body;
        enc = AMF_EncodeString(enc, pend, &av_setDataFrame);
        enc = AMF_EncodeString(enc, pend, &av_onMetaData);
    
        *enc++ = AMF_OBJECT;
    
        enc = AMF_EncodeNamedNumber(enc, pend, &av_duration, 0.0);
        enc = AMF_EncodeNamedNumber(enc, pend, &av_fileSize, 0.0);
    
        // videosize
        enc = AMF_EncodeNamedNumber(enc, pend, &av_width, _stream.videoConfiguration.videoSize.width);
        enc = AMF_EncodeNamedNumber(enc, pend, &av_height, _stream.videoConfiguration.videoSize.height);
    
        // video
        enc = AMF_EncodeNamedString(enc, pend, &av_videocodecid, &av_avc1);
    
        enc = AMF_EncodeNamedNumber(enc, pend, &av_videodatarate, _stream.videoConfiguration.videoBitRate / 1000.f);
        enc = AMF_EncodeNamedNumber(enc, pend, &av_framerate, _stream.videoConfiguration.videoFrameRate);
    
        // audio
        enc = AMF_EncodeNamedString(enc, pend, &av_audiocodecid, &av_mp4a);
        enc = AMF_EncodeNamedNumber(enc, pend, &av_audiodatarate, _stream.audioConfiguration.audioBitrate);
    
        enc = AMF_EncodeNamedNumber(enc, pend, &av_audiosamplerate, _stream.audioConfiguration.audioSampleRate);
        enc = AMF_EncodeNamedNumber(enc, pend, &av_audiosamplesize, 16.0);
        enc = AMF_EncodeNamedBoolean(enc, pend, &av_stereo, _stream.audioConfiguration.numberOfChannels == 2);
    
        // sdk version
        enc = AMF_EncodeNamedString(enc, pend, &av_encoder, &av_SDKVersion);
    
        *enc++ = 0;
        *enc++ = 0;
        *enc++ = AMF_OBJECT_END;
    
        packet.m_nBodySize = (uint32_t)(enc - packet.m_body);
        if (!PILI_RTMP_SendPacket(_rtmp, &packet, FALSE, &_error)) {
            return;
        }
    }
    
3. Next, each encoded frame is appended to an array that is kept sorted by timestamp, which guarantees the push order.
4. The first element of the array is moved into the send buffer.
5. The sender takes the first frame out of the buffer; a sketch of this buffer follows the list.
6. If it is a video frame (LFVideoFrame), check whether the header has already been sent. The header is the SPS and PPS: they must be pushed before any frame data, otherwise the server cannot parse the stream. The header only needs to be sent once per connection session; if the connection drops and is re-established, the SPS and PPS must be pushed again. Pushing them has a required format; the code appears after the buffer sketch below.
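Here is a minimal sketch of the sorted send buffer from steps 3 to 5. It is a hypothetical simplification (LFLiveKit's real class, LFStreamingBuffer, also drops frames when the network is congested):

    @interface SendBuffer : NSObject
    - (void)appendFrame:(LFFrame *)frame;
    - (LFFrame *)popFirstFrame;
    @end

    @implementation SendBuffer {
        NSMutableArray<LFFrame *> *_list;
        dispatch_semaphore_t _lock;
    }

    - (instancetype)init {
        if (self = [super init]) {
            _list = [NSMutableArray array];
            _lock = dispatch_semaphore_create(1);
        }
        return self;
    }

    - (void)appendFrame:(LFFrame *)frame {
        dispatch_semaphore_wait(_lock, DISPATCH_TIME_FOREVER);
        [_list addObject:frame];
        // Keep the queue ordered by timestamp so frames go out in order.
        [_list sortUsingComparator:^NSComparisonResult(LFFrame *a, LFFrame *b) {
            return a.timestamp < b.timestamp ? NSOrderedAscending : NSOrderedDescending;
        }];
        dispatch_semaphore_signal(_lock);
    }

    - (LFFrame *)popFirstFrame {
        dispatch_semaphore_wait(_lock, DISPATCH_TIME_FOREVER);
        LFFrame *frame = _list.firstObject;
        if (frame) [_list removeObjectAtIndex:0];
        dispatch_semaphore_signal(_lock);
        return frame;
    }
    @end

The header push from step 6 packages the SPS and PPS into an AVC sequence header (AVCDecoderConfigurationRecord):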
      - (void)sendVideoHeader:(LFVideoFrame *)videoFrame {
    
        unsigned char *body = NULL;
        NSInteger iIndex = 0;
        NSInteger rtmpLength = 1024;
        const char *sps = videoFrame.sps.bytes;
        const char *pps = videoFrame.pps.bytes;
        NSInteger sps_len = videoFrame.sps.length;
        NSInteger pps_len = videoFrame.pps.length;
    
        body = (unsigned char *)malloc(rtmpLength);
        memset(body, 0, rtmpLength);
    
        /* FLV video tag header: 0x17 = keyframe (1) + AVC codec (7);
           0x00 = AVC sequence header; then a 3-byte composition time of 0 */
        body[iIndex++] = 0x17;
        body[iIndex++] = 0x00;
    
        body[iIndex++] = 0x00;
        body[iIndex++] = 0x00;
        body[iIndex++] = 0x00;
    
        /* AVCDecoderConfigurationRecord */
        body[iIndex++] = 0x01;      // configurationVersion
        body[iIndex++] = sps[1];    // AVCProfileIndication
        body[iIndex++] = sps[2];    // profile_compatibility
        body[iIndex++] = sps[3];    // AVCLevelIndication
        body[iIndex++] = 0xff;      // lengthSizeMinusOne: 4-byte NALU length prefix
    
        /* sps: count (0xe1 = one SPS), 2-byte length, then the SPS NAL unit */
        body[iIndex++] = 0xe1;
        body[iIndex++] = (sps_len >> 8) & 0xff;
        body[iIndex++] = sps_len & 0xff;
        memcpy(&body[iIndex], sps, sps_len);
        iIndex += sps_len;
    
        /* pps: count (1), 2-byte length, then the PPS NAL unit */
        body[iIndex++] = 0x01;
        body[iIndex++] = (pps_len >> 8) & 0xff;
        body[iIndex++] = (pps_len) & 0xff;
        memcpy(&body[iIndex], pps, pps_len);
        iIndex += pps_len;
    
        [self sendPacket:RTMP_PACKET_TYPE_VIDEO data:body size:iIndex nTimestamp:0];
        free(body);
    }
    

Then send the frame data:

    - (void)sendVideo:(LFVideoFrame *)frame {
    
        NSInteger i = 0;
        NSInteger rtmpLength = frame.data.length + 9;
        unsigned char *body = (unsigned char *)malloc(rtmpLength);
        memset(body, 0, rtmpLength);
    
        if (frame.isKeyFrame) {
            body[i++] = 0x17;        // 1:Iframe  7:AVC
        } else {
            body[i++] = 0x27;        // 2:Pframe  7:AVC
        }
        body[i++] = 0x01;    // AVC NALU (frame data, not sequence header)
        // 3-byte composition time offset, 0 when pts == dts
        body[i++] = 0x00;
        body[i++] = 0x00;
        body[i++] = 0x00;
        // 4-byte big-endian NALU length, then the NALU itself (AVCC layout)
        body[i++] = (frame.data.length >> 24) & 0xff;
        body[i++] = (frame.data.length >> 16) & 0xff;
        body[i++] = (frame.data.length >>  8) & 0xff;
        body[i++] = (frame.data.length) & 0xff;
        memcpy(&body[i], frame.data.bytes, frame.data.length);
    
        [self sendPacket:RTMP_PACKET_TYPE_VIDEO data:body size:(rtmpLength) nTimestamp:frame.timestamp];
        free(body);
    }

sendPacket: wraps the assembled body in a PILI_RTMPPacket and hands it to librtmp:

    - (NSInteger)sendPacket:(unsigned int)nPacketType data:(unsigned char *)data size:(NSInteger)size nTimestamp:(uint64_t)nTimestamp {
        NSInteger rtmpLength = size;
        PILI_RTMPPacket rtmp_pack;
        PILI_RTMPPacket_Reset(&rtmp_pack);
        PILI_RTMPPacket_Alloc(&rtmp_pack, (uint32_t)rtmpLength);
    
        rtmp_pack.m_nBodySize = (uint32_t)size;
        memcpy(rtmp_pack.m_body, data, size);
        rtmp_pack.m_hasAbsTimestamp = 0;
        rtmp_pack.m_packetType = nPacketType;
        if (_rtmp) rtmp_pack.m_nInfoField2 = _rtmp->m_stream_id;
        rtmp_pack.m_nChannel = 0x04;    // channel used for media data
        rtmp_pack.m_headerType = RTMP_PACKET_SIZE_LARGE;
        if (RTMP_PACKET_TYPE_AUDIO == nPacketType && size != 4) {
            // audio frames can use the smaller chunk header (timestamp delta)
            rtmp_pack.m_headerType = RTMP_PACKET_SIZE_MEDIUM;
        }
        rtmp_pack.m_nTimeStamp = (uint32_t)nTimestamp;
    
        NSInteger nRet = [self RtmpPacketSend:&rtmp_pack];
    
        PILI_RTMPPacket_Free(&rtmp_pack);
        return nRet;
    }
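
RtmpPacketSend: is not shown in the post; here is a sketch under the assumption that it simply forwards to librtmp while the session is still connected:

    - (NSInteger)RtmpPacketSend:(PILI_RTMPPacket *)packet {
        if (_rtmp && PILI_RTMP_IsConnected(_rtmp)) {
            // queue = FALSE: write the packet out directly
            return PILI_RTMP_SendPacket(_rtmp, packet, FALSE, &_error);
        }
        return -1;
    }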
    

Note that the timestamp must be included when pushing:

    [self sendPacket:RTMP_PACKET_TYPE_VIDEO data:body size:(rtmpLength) nTimestamp:frame.timestamp];
    

RTMP sends data packet by packet, and every packet carries its own timestamp.
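
The post does not show how frame.timestamp is computed. A hypothetical helper (the name and the _startTimestamp ivar are made up), assuming timestamps are milliseconds relative to the first frame of the session, which is how RTMP expects them:

    // Hypothetical helper: RTMP timestamps are 32-bit millisecond values
    // measured from the start of the session.
    - (uint64_t)uploadTimestamp:(uint64_t)captureTimeMs {
        if (_startTimestamp == 0) {
            _startTimestamp = captureTimeMs;    // remember the session start
        }
        return captureTimeMs - _startTimestamp;
    }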

7. If it is an audio frame, the same logic applies: the header must be pushed before any frame data. The code is as follows:

    - (void)sendAudioHeader:(LFAudioFrame *)audioFrame {
    
        NSInteger rtmpLength = audioFrame.audioInfo.length + 2;     /* +2 for the 0xAF 0x00 tag header; the spec data itself is usually 2 bytes */
        unsigned char *body = (unsigned char *)malloc(rtmpLength);
        memset(body, 0, rtmpLength);
    
        /* AF 00 + AAC sequence header */
        body[0] = 0xAF;    // 0xA_ = AAC; the low bits (44 kHz, 16-bit, stereo) are fixed for AAC
        body[1] = 0x00;    // 0x00 marks an AAC sequence header
        memcpy(&body[2], audioFrame.audioInfo.bytes, audioFrame.audioInfo.length);    /* audioInfo is the AAC sequence header (AudioSpecificConfig) */
        [self sendPacket:RTMP_PACKET_TYPE_AUDIO data:body size:rtmpLength nTimestamp:0];
        free(body);
    }
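
The audioInfo bytes are the AAC AudioSpecificConfig produced by the audio encoder. A hypothetical sketch of how those two bytes are laid out, assuming AAC-LC at 44.1 kHz (the helper name is made up; LFLiveKit builds this inside its audio encoder):

    // AudioSpecificConfig layout (2 bytes for AAC-LC):
    // 5 bits object type | 4 bits sample-rate index | 4 bits channel config | 3 bits flags
    - (NSData *)makeAudioSpecificConfigWithChannels:(UInt8)channels {
        UInt8 objectType = 2;    // AAC-LC
        UInt8 srIndex = 4;       // 44.1 kHz
        UInt8 config[2];
        config[0] = (objectType << 3) | (srIndex >> 1);
        config[1] = ((srIndex & 0x01) << 7) | ((channels & 0x0F) << 3);
        return [NSData dataWithBytes:config length:2];
    }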
    
    - (void)sendAudio:(LFFrame *)frame {
    
        NSInteger rtmpLength = frame.data.length + 2;    /* +2 for the 0xAF 0x01 tag header */
        unsigned char *body = (unsigned char *)malloc(rtmpLength);
        memset(body, 0, rtmpLength);
    
        /* AF 01 + AAC raw data */
        body[0] = 0xAF;
        body[1] = 0x01;    // 0x01 marks raw AAC frame data
        memcpy(&body[2], frame.data.bytes, frame.data.length);
        [self sendPacket:RTMP_PACKET_TYPE_AUDIO data:body size:rtmpLength nTimestamp:frame.timestamp];
        free(body);
    }
    
    

As promised, the full code is attached:
Demo link
