Implementing Android RTMP Screen-Mirroring Live Streaming


Author: 蒋斌文 | Published 2021-03-26 22:38

    Source code: GitHub

    Introduction

    When you watch a mobile-game live stream, what you see as a viewer is the content of the player's screen. How is that done? This post builds a screen-recording live-streaming demo by hand, reproducing the effect of mobile-game screen-mirrored broadcasts.

    Getting the screen data is easy: Android provides a system service for it. The hard part is transporting the data to the live server. We use RtmpDump to send the RTMP data; since RtmpDump is implemented in C, NDK development is required, and Java alone is not enough. At the time there was no open-source RTMP implementation in Java, which is why the RtmpDump C library is used for RTMP packetization.

    Result

    [Image: demo of the screen-mirroring live stream]

    Basic Flow

    • Capture the screen data
    • Encode the data as H.264
    • Package it into RTMP packets
    • Push it to the live server's ingest URL

    [Image: RTMP live-streaming implementation flow]

    Capturing Screen Data

    MediaProjection is the video-capture interface provided by the SDK:

    A token granting applications the ability to capture screen contents and/or record system audio. The exact capabilities granted depend on the type of MediaProjection.

    A screen capture session can be started through MediaProjectionManager.createScreenCaptureIntent(). This grants the ability to capture screen contents, but not system audio.

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == 100 && resultCode == Activity.RESULT_OK) {
            // read the push URL the user entered, if any
            if (editText.getText() != null && !TextUtils.isEmpty(editText.getText().toString())) {
                url = editText.getText().toString();
            }
            Log.i(TAG, " url:" + url);
            // the user granted screen capture; obtain the MediaProjection
            mediaProjection = mediaProjectionManager.getMediaProjection(resultCode, data);
        }
    }

    public void startLive(View view) {
        this.mediaProjectionManager = (MediaProjectionManager) getSystemService(Context.MEDIA_PROJECTION_SERVICE);
        // show the system consent dialog for screen capture
        Intent captureIntent = mediaProjectionManager.createScreenCaptureIntent();
        startActivityForResult(captureIntent, 100);
    }
    
    mediaProjectionManager.createScreenCaptureIntent

    Obtaining the screen data

    Through the Intent result we obtain the MediaProjection from the MediaProjectionManager, and from the MediaProjection we create a VirtualDisplay; the raw screen frames come from its Surface:

    public VirtualDisplay createVirtualDisplay(String name,
            int width,
            int height,
            int dpi,
            int flags,
            Surface surface,
            VirtualDisplay.Callback callback,
            Handler handler)
    

    Creates a VirtualDisplay to capture the contents of the screen.

    createVirtualDisplay API

    mediaProjection → produces the screen-recording data

    Encoding the Data as H.264

    MediaProjection delivers raw (YUV) frames, so we first encode them to H.264. We use the platform MediaCodec for hardware encoding; since this is a demo project, codec compatibility is not handled and MediaCodec is used directly. For background on MediaCodec usage, see the MediaCodec introduction.

    public void startLive(MediaProjection mediaProjection) {
        this.mediaProjection = mediaProjection;

        MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC,
                720,
                1280);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        // bitrate, frame rate and key-frame interval
        format.setInteger(MediaFormat.KEY_BIT_RATE, 400_000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 15);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
        try {
            // hardware H.264 encoder
            mediaCodec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
            mediaCodec.configure(format, null, null,
                    MediaCodec.CONFIGURE_FLAG_ENCODE);
            Surface surface = mediaCodec.createInputSurface();

            // mirror the screen onto a virtual display backed by the encoder's surface
            virtualDisplay = mediaProjection.createVirtualDisplay(
                    "screen-codec",
                    720, 1280, 1,
                    DisplayManager.VIRTUAL_DISPLAY_FLAG_PUBLIC,
                    surface, null, null);

        } catch (IOException e) {
            e.printStackTrace();
        }
        LiveTaskManager.getInstance().execute(this);
    }
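
    The LiveTaskManager used above is just a small thread-pool singleton that runs the encoding loop off the main thread. A minimal reconstruction (the real class lives in the demo repo; this sketch is an assumption):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Minimal reconstruction of the demo's LiveTaskManager: a singleton that
    // wraps a thread pool so encoding loops run off the UI thread.
    public class LiveTaskManager {
        private static volatile LiveTaskManager instance;
        private final ExecutorService executor = Executors.newCachedThreadPool();

        public static LiveTaskManager getInstance() {
            if (instance == null) {
                synchronized (LiveTaskManager.class) {
                    if (instance == null) instance = new LiveTaskManager();
                }
            }
            return instance;
        }

        public void execute(Runnable task) {
            executor.execute(task);
        }
    }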
    

    Surface surface = mediaCodec.createInputSurface(); creates an input surface from the encoder: anything rendered onto it is encoded automatically. Calling createVirtualDisplay then creates a VirtualDisplay that mirrors the phone screen onto that virtual display. createVirtualDisplay takes a Surface, and the encoder reads its input frames from that same Surface.

    VirtualDisplay-Surface-MediaCodec

    Once configured, pull the encoded data from MediaCodec:

    @Override
    public void run() {
        isLiving = true;
        mediaCodec.start();
        MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();

        while (isLiving) {
            // if more than 2 s have passed, ask the encoder for an I-frame
            if (System.currentTimeMillis() - timeStamp >= 2000) {
                // notify the codec (DSP) via a parameter Bundle
                Bundle msgBundle = new Bundle();
                msgBundle.putInt(MediaCodec.PARAMETER_KEY_REQUEST_SYNC_FRAME, 0);
                mediaCodec.setParameters(msgBundle);
                timeStamp = System.currentTimeMillis();
            }
            // The usual MediaCodec loop: wait for an output buffer. There is no
            // input side to feed here - the input Surface feeds the codec internally.
            int outputBufferIndex = mediaCodec.dequeueOutputBuffer(bufferInfo, 100_000);
            if (outputBufferIndex >= 0) {
                // got an encoded H.264 buffer
                ByteBuffer byteBuffer = mediaCodec.getOutputBuffer(outputBufferIndex);
                byte[] outData = new byte[bufferInfo.size];
                byteBuffer.get(outData);
                // ... hand outData to the RTMP layer or dump it to a file ...
                mediaCodec.releaseOutputBuffer(outputBufferIndex, false);
            }
        }
    }
    

    In the video-codec thread's run() method, we keep polling int outputBufferIndex = mediaCodec.dequeueOutputBuffer(bufferInfo, 100_000); for finished H.264 data. Why H.264? Because the encoder was configured with MediaFormat.MIMETYPE_VIDEO_AVC.

    At this point you can write the encoded bytes to a file and inspect them.
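
    A minimal dump helper for that (the path and helper name are illustrative; on a real device prefer getExternalFilesDir()):

    import java.io.FileOutputStream;
    import java.io.IOException;

    // Append each encoded H.264 buffer to a dump file for offline inspection.
    private void dumpToFile(byte[] outData) {
        try (FileOutputStream fos = new FileOutputStream("/sdcard/codec.h264", true)) {
            fos.write(outData);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }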

    [Image: hex dump of the encoded H.264 file]

    You can see the SPS, PPS, I-frames and other key information.

    You can also play it back with the ffplay command:

    ffplay -i codec.h264
    

    We now have encoded H.264 data; next we package it into RTMP packets.

    LIBRTMP

    An open-source RTMP library in C; it wraps sockets for TCP communication and implements sending and receiving RTMP data.

    RTMPDump

    rtmpdump is a toolkit for RTMP streams. All forms of RTMP are supported, including rtmp://, rtmpt://, rtmpe://, rtmpte://, and rtmps://.

    License: GPLv2
    Copyright (C) 2009 Andrej Stepanchuk
    Copyright (C) 2010-2011 Howard Chu

    Download the source:

    git clone git://git.ffmpeg.org/rtmpdump
    

    The latest release is 2.4 which you can check out from git. Aside from various minor bugfixes since 2.3, RTMPE type 9 handshakes are now supported.

    We use the third-party library RtmpDump to push the stream to the live server. Its code base is small, so we copy the sources directly into the Android project's cpp directory.

    # Define the NO_CRYPTO macro: with #define NO_CRYPTO set,
    # the build does not use SSL and rtmps is not supported.
    set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DNO_CRYPTO")

    # Collect every file in the current directory into the SOURCE variable
    aux_source_directory(. SOURCE)
    # Build the sources referenced by ${SOURCE} into the static library librtmp.a
    add_library(rtmp STATIC ${SOURCE})
    

    In the project's CMakeLists, add the subdirectory and link the built rtmp static library:

    # Add the librtmp subdirectory
    add_subdirectory(librtmp)
    # Link the libraries together
    target_link_libraries( # Specifies the target library.
                           native-lib
                           # Links the target library to the log library
                           # included in the NDK.
                           ${log-lib}
                           rtmp)
    

    Using RtmpDump

    RtmpDump
    • Connect to the server
    1. RTMP_Init(RTMP *r): initialize the handle
    2. RTMP_EnableWrite(RTMP *r): enable data writing (publish mode)
    3. RTMP_Connect(RTMP *r, RTMPPacket *cp)
    4. RTMP_ConnectStream(RTMP *r, int seekTime)
    • Send data
    1. RTMPPacket_Alloc(RTMPPacket *p, int nSize)
    2. RTMP_SendPacket(RTMP *r, RTMPPacket *packet, int queue)
    3. RTMPPacket_Free(RTMPPacket *p)

    • RTMP key-frame packet format
    [Image: RTMP key-frame packet layout]
    • RTMP non-key-frame packet format
    [Image: RTMP non-key-frame packet layout]
    • SPS/PPS packet
    [Image: SPS/PPS packet layout]

    Start the configured SRS streaming server:

    SRS streaming server
    ./objs/srs -c conf/rtmp.conf
    lsof -i :1935
    
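    conf/rtmp.conf here is SRS's minimal RTMP sample config; it looks roughly like this (a sketch from memory of SRS's sample configs - verify against your SRS version):

    listen              1935;
    max_connections     1000;
    vhost __defaultVhost__ {
    }
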

    Connecting to the Live Server

    For this step, prepare the live push URL in advance, then implement the native method:

    extern "C"
    JNIEXPORT jboolean JNICALL
    Java_com_lecture_rtmtscreenlive_ScreenLive_connect(JNIEnv *env, jobject thiz, jstring url_) {

        // convert the Java string to a C string first; librtmp cannot use a jstring
        const char *url = env->GetStringUTFChars(url_, 0);
        int ret;
        do {
            live = (Live *) malloc(sizeof(Live));
            memset(live, 0, sizeof(Live));
            live->rtmp = RTMP_Alloc();      // allocate the RTMP handle
            RTMP_Init(live->rtmp);
            live->rtmp->Link.timeout = 10;  // connection parameters, e.g. timeout
            LOGI("connect %s", url);
            if (!(ret = RTMP_SetupURL(live->rtmp, (char *) url))) break;
            RTMP_EnableWrite(live->rtmp);   // enable write (publish) mode
            LOGI("RTMP_Connect");
            if (!(ret = RTMP_Connect(live->rtmp, 0))) break;
            LOGI("RTMP_ConnectStream ");
            if (!(ret = RTMP_ConnectStream(live->rtmp, 0))) break;
            LOGI("connect success");
        } while (0);
        if (!ret && live) {
            free(live);
            live = nullptr;
        }

        env->ReleaseStringUTFChars(url_, url);
        return ret;
    }
    
    2021-03-26 20:25:33.202 9139-9211/com.lecture.rtmtscreenlive I/DDDDDD: connect rtmp://192.168.10.224/live/livestream
    2021-03-26 20:25:33.202 9139-9211/com.lecture.rtmtscreenlive I/DDDDDD: RTMP_Connect
    2021-03-26 20:25:33.422 9139-9211/com.lecture.rtmtscreenlive I/DDDDDD: RTMP_ConnectStream 
    2021-03-26 20:25:33.562 9139-9211/com.lecture.rtmtscreenlive I/DDDDDD: connect success
    2021-03-26 20:25:34.152 9139-9222/com.lecture.rtmtscreenlive I/------>dddd<---------: run: -2
    2021-03-26 20:25:34.152 9139-9222/com.lecture.rtmtscreenlive I/------>dddd<---------: run: 0
    
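    On the Java side, the JNI symbol above implies a com.lecture.rtmtscreenlive.ScreenLive class with a native connect method. A minimal declaration sketch (only connect() is confirmed by the JNI symbol; the library name matches the CMake target):

    package com.lecture.rtmtscreenlive;

    // Java-side counterpart of the JNI function above.
    public class ScreenLive {
        static {
            System.loadLibrary("native-lib");
        }

        // Connects to the RTMP push URL; returns true on success.
        public native boolean connect(String url);
    }
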

    The server connection succeeded; now we start sending the RTMP video data.

    • Packaging video data
    RTMPPacket *createVideoPackage(int8_t *buf, int len, const long tms, Live *live) {
        // Skip the 4-byte start code (00 00 00 01); buf then points at the
        // NAL header (0x65 for an IDR/key frame)
        buf += 4;
        len -= 4;
        int body_size = len + 9;
        RTMPPacket *packet = (RTMPPacket *) malloc(sizeof(RTMPPacket));
        RTMPPacket_Alloc(packet, len + 9);

        packet->m_body[0] = 0x27;          // inter frame + AVC
        if (buf[0] == 0x65) {              // key frame (IDR)
            packet->m_body[0] = 0x17;
            LOGI("sending key frame data");
        }
        packet->m_body[1] = 0x01;          // AVCPacketType = 1: NALU
        packet->m_body[2] = 0x00;          // CompositionTime = 0
        packet->m_body[3] = 0x00;
        packet->m_body[4] = 0x00;
        // NAL length (4 bytes, big-endian)
        packet->m_body[5] = (len >> 24) & 0xff;
        packet->m_body[6] = (len >> 16) & 0xff;
        packet->m_body[7] = (len >> 8) & 0xff;
        packet->m_body[8] = (len) & 0xff;

        // NAL payload
        memcpy(&packet->m_body[9], buf, len);

        packet->m_packetType = RTMP_PACKET_TYPE_VIDEO;
        packet->m_nBodySize = body_size;
        packet->m_nChannel = 0x04;
        packet->m_nTimeStamp = tms;
        packet->m_hasAbsTimestamp = 0;
        packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
        packet->m_nInfoField2 = live->rtmp->m_stream_id;
        return packet;
    }
    
    • Packaging SPS/PPS (the AVC sequence header)
    RTMPPacket *createVideoPackage(Live *live) {
        int body_size = 13 + live->sps_len + 3 + live->pps_len;
        RTMPPacket *packet = (RTMPPacket *) malloc(sizeof(RTMPPacket));
        RTMPPacket_Alloc(packet, body_size);
        int i = 0;
        // FrameType = key frame + CodecID = AVC, same as for an IDR frame
        packet->m_body[i++] = 0x17;
        // AVCPacketType = 0x00: AVC sequence header
        packet->m_body[i++] = 0x00;
        // CompositionTime
        packet->m_body[i++] = 0x00;
        packet->m_body[i++] = 0x00;
        packet->m_body[i++] = 0x00;
        // AVCDecoderConfigurationRecord
        packet->m_body[i++] = 0x01;         // configurationVersion = 1
        packet->m_body[i++] = live->sps[1]; // profile, e.g. baseline/main/high

        packet->m_body[i++] = live->sps[2]; // profile_compatibility
        packet->m_body[i++] = live->sps[3]; // profile level
        packet->m_body[i++] = 0xFF; // reserved (6 bits) + lengthSizeMinusOne (2 bits), always 0xFF
        // sps
        packet->m_body[i++] = 0xE1; // reserved (3 bits) + number of SPS (5 bits), always 0xE1
        // SPS length, 2 bytes
        packet->m_body[i++] = (live->sps_len >> 8) & 0xff; // high byte
        packet->m_body[i++] = live->sps_len & 0xff;        // low byte
        memcpy(&packet->m_body[i], live->sps, live->sps_len);
        i += live->sps_len;

        /* pps */
        packet->m_body[i++] = 0x01; // number of PPS
        // PPS length, 2 bytes
        packet->m_body[i++] = (live->pps_len >> 8) & 0xff;
        packet->m_body[i++] = live->pps_len & 0xff;
        memcpy(&packet->m_body[i], live->pps, live->pps_len);

        packet->m_packetType = RTMP_PACKET_TYPE_VIDEO;
        packet->m_nBodySize = body_size;
        packet->m_nChannel = 0x04;
        packet->m_nTimeStamp = 0;
        packet->m_hasAbsTimestamp = 0;
        packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
        packet->m_nInfoField2 = live->rtmp->m_stream_id;
        return packet;
    }
    
    int sendPacket(RTMPPacket *packet) {
        int r = RTMP_SendPacket(live->rtmp, packet, 1);
        RTMPPacket_Free(packet);
        free(packet);
        return r;
    }
    
    void prepareVideo(int8_t *data, int len, Live *live) {
        for (int i = 0; i < len; i++) {
            // look for the start code 0x00 0x00 0x00 0x01
            if (i + 4 < len) {
                if (data[i] == 0x00 && data[i + 1] == 0x00
                    && data[i + 2] == 0x00
                    && data[i + 3] == 0x01) {
                    // layout: 00 00 00 01 [SPS, type 7] 00 00 00 01 [PPS, type 8]
                    // split SPS and PPS: 0x68 marks the PPS NAL header
                    if (data[i + 4] == 0x68) {
                        // strip the start codes
                        live->sps_len = i - 4;
                        live->sps = static_cast<int8_t *>(malloc(live->sps_len));
                        memcpy(live->sps, data + 4, live->sps_len);

                        live->pps_len = len - (4 + live->sps_len) - 4;
                        live->pps = static_cast<int8_t *>(malloc(live->pps_len));
                        memcpy(live->pps, data + 4 + live->sps_len + 4, live->pps_len);
                        LOGI("sps:%d pps:%d", live->sps_len, live->pps_len);
                        break;
                    }
                }
            }
        }
    }
    
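    The tms argument passed to these packagers is a millisecond timestamp relative to the start of the stream. One way to derive it on the Java side (an illustrative sketch, not the repo's exact code):

    // Derive a stream-relative millisecond timestamp for each encoded frame,
    // based on MediaCodec's presentationTimeUs. startTimeUs is captured from
    // the first frame; this helper is an assumption for illustration.
    private long startTimeUs = -1;

    private long relativeTimestampMs(MediaCodec.BufferInfo info) {
        if (startTimeUs < 0) {
            startTimeUs = info.presentationTimeUs;
        }
        return (info.presentationTimeUs - startTimeUs) / 1000;
    }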

    Testing

    ffplay rtmp://192.168.10.224/live/livestream
    
    [Image: ffplay playing the live stream]

    Capturing Audio Data

    • Capture with AudioRecord
    // Recording setup: with the sample format, channel count and sample rate
    // fixed, the captured PCM data is the same regardless of the device
    minBufferSize = AudioRecord.getMinBufferSize(44100,
            AudioFormat.CHANNEL_IN_MONO,
            AudioFormat.ENCODING_PCM_16BIT);
    audioRecord = new AudioRecord(
            MediaRecorder.AudioSource.MIC, 44100,
            AudioFormat.CHANNEL_IN_MONO,
            AudioFormat.ENCODING_PCM_16BIT, minBufferSize);
    
    • Encode the captured PCM data with MediaCodec
    MediaFormat format = MediaFormat.createAudioFormat(MediaFormat.MIMETYPE_AUDIO_AAC, 44100, 1);
    // AAC profile (LC)
    format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
    // AAC bitrate, bits per second
    format.setInteger(MediaFormat.KEY_BIT_RATE, 64_000);

    mediaCodec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_AUDIO_AAC);
    mediaCodec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    mediaCodec.start();
    
    • Read the PCM data from the microphone
    audioRecord.startRecording();
    // fixed-size buffer
    byte[] buffer = new byte[minBufferSize];
    // read the microphone PCM data into the buffer
    int len = audioRecord.read(buffer, 0, buffer.length);
    
    • Get the AAC data encoded by MediaCodec
    int index = mediaCodec.dequeueInputBuffer(0);
    if (index >= 0) {
        ByteBuffer inputBuffer = mediaCodec.getInputBuffer(index);
        inputBuffer.clear();
        inputBuffer.put(buffer, 0, len);
        // queue the input buffer after filling it with PCM
        mediaCodec.queueInputBuffer(index, 0, len,
                System.nanoTime() / 1000, 0);
    }
    
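    The encoded AAC then has to be drained from the encoder's output buffers. A sketch of the drain loop (names are illustrative; the output flagged BUFFER_FLAG_CODEC_CONFIG is the AudioSpecificConfig and is the payload for the type == 1 "sequence header" packet in createAudioPacket below):

    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    int outIndex = mediaCodec.dequeueOutputBuffer(info, 0);
    while (outIndex >= 0) {
        ByteBuffer outBuf = mediaCodec.getOutputBuffer(outIndex);
        byte[] aac = new byte[info.size];
        outBuf.get(aac);
        // BUFFER_FLAG_CODEC_CONFIG marks the AudioSpecificConfig: send it once
        // as the RTMP AAC sequence header, then send ordinary frames with 0x01.
        boolean isConfig = (info.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0;
        // hand (aac, isConfig, timestamp) to the native RTMP layer here
        mediaCodec.releaseOutputBuffer(outIndex, false);
        outIndex = mediaCodec.dequeueOutputBuffer(info, 0);
    }
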

    The audio and video streams carried in RTMP packets are encapsulated the same way as audio and video in FLV tags, so we simply package the audio and video following the FLV format.

    [Image: FLV tag packaging format]
    RTMPPacket *createAudioPacket(int8_t *buf, const int len, const int type, const long tms,
                                  Live *live) {

        // Assemble the audio packet: two fixed header bytes precede the AAC data.
        // Byte 0 is always 0xAF; byte 1 is 0x00 for the AAC sequence header
        // (the first packet) and 0x01 for ordinary AAC frames.
        int body_size = len + 2;
        RTMPPacket *packet = (RTMPPacket *) malloc(sizeof(RTMPPacket));
        RTMPPacket_Alloc(packet, body_size);
        // audio tag header
        packet->m_body[0] = 0xAF;
        if (type == 1) {
            // AAC sequence header
            packet->m_body[1] = 0x00;
        } else {
            packet->m_body[1] = 0x01;
        }
        memcpy(&packet->m_body[2], buf, len);
        packet->m_packetType = RTMP_PACKET_TYPE_AUDIO;
        packet->m_nChannel = 0x05;
        packet->m_nBodySize = body_size;
        packet->m_nTimeStamp = tms;
        packet->m_hasAbsTimestamp = 0;
        packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
        packet->m_nInfoField2 = live->rtmp->m_stream_id;
        return packet;
    }
    

    Source code: GitHub
