When using the WebRTC Objective-C API on iOS, we use RTCEAGLVideoView to display the remote video:
@interface RTCEAGLVideoView : UIView <RTCVideoRenderer>
@property(nonatomic, weak) id<RTCEAGLVideoViewDelegate> delegate;
@end
We can implement and set an RTCEAGLVideoViewDelegate:
@protocol RTCEAGLVideoViewDelegate
- (void)videoView:(RTCEAGLVideoView *)videoView didChangeVideoSize:(CGSize)size;
@end
With this in place, a callback fires whenever the remote (or local) video size changes, so we can adjust the view's layout after a size change and avoid stretching the image.
The flow that produces this callback is:
1. After WebRTC decodes a frame it needs to render it, which invokes the following code in VideoRendererAdapter:
class VideoRendererAdapter
    : public rtc::VideoSinkInterface<cricket::VideoFrame> {
 public:
  VideoRendererAdapter(RTCVideoRendererAdapter *adapter) {
    adapter_ = adapter;
    size_ = CGSizeZero;
  }

  void OnFrame(const cricket::VideoFrame &nativeVideoFrame) override {
    RTCVideoFrame *videoFrame = [[RTCVideoFrame alloc]
        initWithVideoBuffer:nativeVideoFrame.video_frame_buffer()
                   rotation:nativeVideoFrame.rotation()
                timeStampNs:nativeVideoFrame.timestamp_us() *
                            rtc::kNumNanosecsPerMicrosec];
    // Frames rotated by 90° or 270° are displayed transposed,
    // so swap width and height before comparing.
    CGSize current_size = (videoFrame.rotation % 180 == 0)
                              ? CGSizeMake(videoFrame.width, videoFrame.height)
                              : CGSizeMake(videoFrame.height, videoFrame.width);
    if (!CGSizeEqualToSize(size_, current_size)) {
      size_ = current_size;
      [adapter_.videoRenderer setSize:size_];
    }
    [adapter_.videoRenderer renderFrame:videoFrame];
  }

 private:
  __weak RTCVideoRendererAdapter *adapter_;
  CGSize size_;
};
2. This in turn calls setSize: in RTCEAGLVideoView:
- (void)setSize:(CGSize)size {
  __weak RTCEAGLVideoView *weakSelf = self;
  dispatch_async(dispatch_get_main_queue(), ^{
    RTCEAGLVideoView *strongSelf = weakSelf;
    [strongSelf.delegate videoView:strongSelf didChangeVideoSize:size];
  });
}
3. setSize: dispatches to the main queue and invokes the RTCEAGLVideoViewDelegate method videoView:didChangeVideoSize:.