The Conversation: Deep Audio-Visual Speech Enhancement


Author: Woooooooooooooo | Published 2019-10-08 09:32

    June 2018, University of Oxford

    Uses lip images to aid speech separation

    Paper: https://arxiv.org/pdf/1804.04121.pdf

    Project page (interactive demo): http://www.robots.ox.ac.uk/~vgg/demo/theconversation


    Abstract

    Our goal is to isolate individual speakers from multi-talker simultaneous speech in videos. Existing works in this area have focussed on trying to separate utterances from known speakers in controlled environments. In this paper, we propose a deep audio-visual speech enhancement network that is able to separate a speaker’s voice given lip regions in the corresponding video, by predicting both the magnitude and the phase of the target signal. The method is applicable to speakers unheard and unseen during training, and for unconstrained environments. We demonstrate strong quantitative and qualitative results, isolating extremely challenging real-world examples.


    1. Introduction

    In the film The Conversation (dir. Francis Ford Coppola, 1974), the protagonist, played by Gene Hackman, goes to inordinate lengths to record a couple’s conversation in a crowded city square. Despite many ingenious placements of microphones, he did not use the lip motion of the speakers to suppress speech from others nearby. In this paper we propose a new model for this task of audio-visual speech enhancement, one he could have used.


    More generally, we propose an audio-visual neural network that can isolate a speaker’s voice from others, using visual information from the target speaker’s lips: Given a noisy audio signal and the corresponding speaker video, we produce an enhanced audio signal containing only the target speaker’s voice with the rest of the speakers and background noise suppressed.


    Rather than synthesising the voice from scratch, which would be a challenging task, we instead predict a mask that filters the noisy spectrogram of the input. Many speech enhancement approaches focus on refining only the magnitude of the noisy input signal and use the noisy phase for the signal reconstruction. This works well for high signal-to-noise-ratio scenarios, but as the SNR decreases, the noisy phase becomes a bad approximation of the ground truth one [1]. Instead, we propose correction modules for both the magnitude and phase. The architecture is summarised in Figure 1. In training, we initialize the visual stream with a network pre-trained on a word-level lipreading task, but after this, we train from unlabelled data (Section 3.1) where no explicit annotation is required at the word, character or phoneme-level.
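The masking-plus-phase-correction idea above can be sketched numerically. This is a minimal illustration, not the paper's implementation: the network outputs (a soft magnitude mask in [0, 1] and a per-bin phase correction) are stood in by random placeholders, and the spectrogram shape is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
freq_bins, frames = 257, 100

# Stand-in for the complex STFT of the noisy mixture.
noisy = (rng.standard_normal((freq_bins, frames))
         + 1j * rng.standard_normal((freq_bins, frames)))

# Placeholder network outputs: a soft mask that filters the noisy
# magnitude, and a residual that corrects the noisy phase.
mag_mask = rng.uniform(0.0, 1.0, (freq_bins, frames))
phase_residual = rng.uniform(-0.1, 0.1, (freq_bins, frames))

# Enhanced magnitude: mask the noisy magnitude instead of
# synthesising the target voice from scratch.
enhanced_mag = mag_mask * np.abs(noisy)

# Enhanced phase: start from the noisy phase and apply the
# predicted correction, rather than reusing the noisy phase as-is.
enhanced_phase = np.angle(noisy) + phase_residual

# Recombine into a complex spectrogram, ready for an inverse STFT.
enhanced = enhanced_mag * np.exp(1j * enhanced_phase)
```

Because the mask values lie in [0, 1], each time-frequency bin of the enhanced signal can only be attenuated relative to the mixture, which is what makes masking an easier learning target than direct synthesis.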


    Note that the mask mentioned here should be distinguished from a mask in image segmentation. In image segmentation, each pixel is ultimately assigned a single fixed class (e.g. after NMS-style post-processing), whereas the mask here can be a soft ratio mask.
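The distinction in the note above can be shown with a toy example (the magnitudes are made up for illustration): a binary mask keeps or discards each time-frequency bin outright, while a ratio mask scales each bin by the target's estimated share of the energy.

```python
import numpy as np

# Made-up per-bin magnitudes for a target speaker and the interference.
target_mag = np.array([3.0, 0.5, 2.0, 0.1])
noise_mag = np.array([1.0, 2.0, 0.5, 4.0])

# Binary mask: 1 where the target dominates the bin, 0 otherwise.
binary_mask = (target_mag > noise_mag).astype(float)

# Ratio mask: the target's fraction of the bin's energy, a soft
# value in [0, 1] rather than a hard class decision per bin.
ratio_mask = target_mag / (target_mag + noise_mag)

print(binary_mask)                # [1. 0. 1. 0.]
print(np.round(ratio_mask, 2))    # [0.75 0.2  0.8  0.02]
```

The soft mask preserves partial target energy in contested bins (e.g. the second bin keeps 20% instead of being zeroed), which typically yields fewer artifacts than hard binary decisions.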
