Android WebRTC Complete Beginner Tutorial 01: Using the Camera

Author: rome753 | Published 2019-02-19 23:32

    WebRTC on Android has no official tutorial and not even API documentation, which is odd given that WebRTC is developed by Google. The official docs and demos currently cover only the web side; they are clear and easy to follow, and the overall usage is the same as on Android, but the concrete details differ a lot.

    Of course, with some digging you can still find a few good tutorials on Google and GitHub. Here I combine them into a complete beginner tutorial with a runnable demo.

    First, a few basic concepts:
    RTC (Real Time Communication): real-time communication.
    WebRTC: real-time communication for the web.
    Signaling: the exchange of strings that describe the media or the network.
    SDP (Session Description Protocol): mainly describes the media of a session.
    ICE (Interactive Connectivity Establishment): the process of finding a working connection path between peers.
    STUN (Session Traversal Utilities for NAT): lets a peer discover its public address from behind a NAT.
    TURN (Traversal Using Relays around NAT): relays traffic when a direct connection through NAT cannot be established.

    Next, how to use it.

    Add the WebRTC library

    Add the dependency to your module's build.gradle. This is the latest officially published build as of January 2019; you can also build the library yourself.

    dependencies {
        ...
        implementation 'org.webrtc:google-webrtc:1.0.26131'
    }
    
    

    Add permissions

        <uses-permission android:name="android.permission.CAMERA" />
        <uses-permission android:name="android.permission.RECORD_AUDIO" />
    

    Note that on Android 6.0 and above you also have to grant the camera and microphone permissions at runtime (or enable them in the system settings). Since it is off-topic here, the demo does not include the permission-request code.
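
    For reference, here is a minimal sketch of such a runtime request, assuming the support library's ActivityCompat and ContextCompat; the helper class, method name and request code are invented for this example and are not part of the demo:

    import android.app.Activity;
    import android.content.pm.PackageManager;
    import android.support.v4.app.ActivityCompat;
    import android.support.v4.content.ContextCompat;

    public class PermissionHelper {

        // arbitrary request code, only used to match the callback
        private static final int REQUEST_MEDIA_PERMISSIONS = 1;

        // Call from the Activity's onCreate() before starting the capturer.
        public static void requestMediaPermissionsIfNeeded(Activity activity) {
            String[] permissions = {
                    android.Manifest.permission.CAMERA,
                    android.Manifest.permission.RECORD_AUDIO
            };
            for (String permission : permissions) {
                if (ContextCompat.checkSelfPermission(activity, permission)
                        != PackageManager.PERMISSION_GRANTED) {
                    // The user's answer arrives in the Activity's onRequestPermissionsResult().
                    ActivityCompat.requestPermissions(activity, permissions, REQUEST_MEDIA_PERMISSIONS);
                    return;
                }
            }
        }
    }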

    Add a SurfaceViewRenderer

    It is a subclass of SurfaceView provided by WebRTC for rendering video.

    <?xml version="1.0" encoding="utf-8"?>
    <android.support.constraint.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
        xmlns:app="http://schemas.android.com/apk/res-auto"
        xmlns:tools="http://schemas.android.com/tools"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        tools:context="cc.rome753.wat.MainActivity">
    
        <org.webrtc.SurfaceViewRenderer
            android:id="@+id/localView"
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            app:layout_constraintBottom_toBottomOf="parent"
            app:layout_constraintLeft_toLeftOf="parent"
            app:layout_constraintRight_toRightOf="parent"
            app:layout_constraintTop_toTopOf="parent" />
    
    </android.support.constraint.ConstraintLayout>
    

    Use the camera

    The main steps are as follows:

    1. Create the PeerConnectionFactory
    2. Create and start a VideoCapturer
    3. Create a VideoSource with the PeerConnectionFactory
    4. Create a VideoTrack from the PeerConnectionFactory and the VideoSource
    5. Initialize the SurfaceViewRenderer video view
    6. Display the VideoTrack in the SurfaceViewRenderer

    package cc.rome753.wat;

    import android.os.Bundle;
    import android.support.v7.app.AppCompatActivity;

    import org.webrtc.AudioSource;
    import org.webrtc.AudioTrack;
    import org.webrtc.Camera1Enumerator;
    import org.webrtc.EglBase;
    import org.webrtc.MediaConstraints;
    import org.webrtc.PeerConnectionFactory;
    import org.webrtc.SurfaceTextureHelper;
    import org.webrtc.SurfaceViewRenderer;
    import org.webrtc.VideoCapturer;
    import org.webrtc.VideoSource;
    import org.webrtc.VideoTrack;

    public class MainActivity extends AppCompatActivity {
    
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_main);
    
            // create PeerConnectionFactory
            PeerConnectionFactory.InitializationOptions initializationOptions =
                    PeerConnectionFactory.InitializationOptions.builder(this).createInitializationOptions();
            PeerConnectionFactory.initialize(initializationOptions);
            PeerConnectionFactory peerConnectionFactory = PeerConnectionFactory.builder().createPeerConnectionFactory();
    
            // create AudioSource
            AudioSource audioSource = peerConnectionFactory.createAudioSource(new MediaConstraints());
            AudioTrack audioTrack = peerConnectionFactory.createAudioTrack("101", audioSource);
    
            // shared EGL context, reused by the capture thread and the local renderer
            EglBase.Context eglBaseContext = EglBase.create().getEglBaseContext();

            // helper thread plus SurfaceTexture that receives the camera frames
            SurfaceTextureHelper surfaceTextureHelper = SurfaceTextureHelper.create("CaptureThread", eglBaseContext);
            // create VideoCapturer
            VideoCapturer videoCapturer = createCameraCapturer();
            VideoSource videoSource = peerConnectionFactory.createVideoSource(videoCapturer.isScreencast());
            videoCapturer.initialize(surfaceTextureHelper, getApplicationContext(), videoSource.getCapturerObserver());
            videoCapturer.startCapture(480, 640, 30); // width, height, frames per second
    
            SurfaceViewRenderer localView = findViewById(R.id.localView);
            localView.setMirror(true);
            localView.init(eglBaseContext, null);
    
            // create VideoTrack
            VideoTrack videoTrack = peerConnectionFactory.createVideoTrack("101", videoSource);
            // display in localView
            videoTrack.addSink(localView);
        }
    
        private VideoCapturer createCameraCapturer() {
            Camera1Enumerator enumerator = new Camera1Enumerator(false);
            final String[] deviceNames = enumerator.getDeviceNames();
    
            // First, try to find front facing camera
            for (String deviceName : deviceNames) {
                if (enumerator.isFrontFacing(deviceName)) {
                    VideoCapturer videoCapturer = enumerator.createCapturer(deviceName, null);
    
                    if (videoCapturer != null) {
                        return videoCapturer;
                    }
                }
            }
    
            // Front facing camera not found, try something else
            for (String deviceName : deviceNames) {
                if (!enumerator.isFrontFacing(deviceName)) {
                    VideoCapturer videoCapturer = enumerator.createCapturer(deviceName, null);
    
                    if (videoCapturer != null) {
                        return videoCapturer;
                    }
                }
            }
    
            return null;
        }
    
    }
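
    One thing the demo skips: the VideoCapturer, SurfaceTextureHelper and SurfaceViewRenderer hold camera and GL resources. A rough sketch of releasing them, assuming videoCapturer, surfaceTextureHelper and localView have been promoted to fields of MainActivity (they are local variables in the code above):

        @Override
        protected void onDestroy() {
            try {
                // stop delivering camera frames; may be interrupted while waiting
                videoCapturer.stopCapture();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            videoCapturer.dispose();
            surfaceTextureHelper.dispose();
            localView.release();
            super.onDestroy();
        }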
    

    References

    https://webrtc.org/start/
    https://codelabs.developers.google.com/codelabs/webrtc-web/#0
    https://developer.mozilla.org/en-US/docs/Web/API/MediaStream
    https://vivekc.xyz/getting-started-with-webrtc-for-android-daab1e268ff4
    https://www.html5rocks.com/en/tutorials/webrtc/basics/

    This project's GitHub repository: /step1camera

    Next: Android WebRTC Complete Beginner Tutorial 02: Local Loopback
