Paper Title
Learning Audio-Visual Dynamics Using Scene Graphs for Audio Source Separation
Paper Authors
Paper Abstract
There exists an unequivocal distinction between the sound produced by a static source and that produced by a moving one, especially when the source moves towards or away from the microphone. In this paper, we propose to use this connection between audio and visual dynamics to solve two challenging tasks simultaneously, namely: (i) separating audio sources from a mixture using visual cues, and (ii) predicting the 3D visual motion of a sounding source using its separated audio. Towards this end, we present Audio Separator and Motion Predictor (ASMP) -- a deep learning framework that leverages the 3D structure of the scene and the motion of sound sources for better audio source separation. At the heart of ASMP is a 2.5D scene graph capturing various objects in the video and their pseudo-3D spatial proximities. This graph is constructed by registering together 2.5D monocular depth predictions from the 2D video frames and associating the 2.5D scene regions with the outputs of an object detector applied to those frames. The ASMP task is then mathematically modeled as the joint problem of: (i) recursively segmenting the 2.5D scene graph into several sub-graphs, each associated with a constituent sound in the input audio mixture (which is then separated), and (ii) predicting the 3D motions of the corresponding sound sources from the separated audio. To empirically evaluate ASMP, we present experiments on two challenging audio-visual datasets, viz. Audio Separation in the Wild (ASIW) and Audio Visual Event (AVE). Our results demonstrate that ASMP achieves a clear improvement in source separation quality, outperforming prior work on both datasets, while also estimating the direction of motion of the sound sources better than other methods.
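To make the 2.5D scene graph construction concrete, here is a minimal sketch, not the authors' implementation: it assumes hypothetical inputs (per-frame `(label, box)` detections from an off-the-shelf detector and an aligned monocular depth map) and placeholder pinhole intrinsics. Each detection is lifted to a pseudo-3D point via the depth map, and detections are connected by edges weighted by their pseudo-3D proximity.

```python
# A minimal sketch of 2.5D scene graph construction, under assumptions
# stated above. All names and intrinsics here are illustrative.
import itertools
import numpy as np
import networkx as nx

def backproject(box, depth, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Lift a 2D box center to a pseudo-3D point using the depth map.
    The pinhole intrinsics are placeholder values, not from the paper."""
    x0, y0, x1, y1 = box
    u, v = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    # Use the median depth inside the box as the object's pseudo-depth.
    z = float(np.median(depth[int(y0):int(y1), int(x0):int(x1)]))
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def build_scene_graph(detections, depth):
    """Nodes are detected objects with pseudo-3D positions; edges are
    weighted by pseudo-3D proximity (inverse distance)."""
    g = nx.Graph()
    for i, (label, box) in enumerate(detections):
        g.add_node(i, label=label, pos=backproject(box, depth))
    for i, j in itertools.combinations(g.nodes, 2):
        d = np.linalg.norm(g.nodes[i]["pos"] - g.nodes[j]["pos"])
        g.add_edge(i, j, weight=1.0 / (1.0 + d))
    return g
```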
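The recursive segmentation in step (i) can likewise be caricatured as a peel-one-source-at-a-time loop. This is only a structural sketch under our own assumptions: `select_subgraph` and `separate` stand in for the paper's learned modules, whose actual interfaces the abstract does not specify.

```python
def separate_recursively(graph, mixture, select_subgraph, separate):
    """Structural sketch of the recursive formulation: on each step,
    pick the sub-graph for one sound source, separate that sound from
    the residual mixture, and remove both from further consideration.
    `select_subgraph` and `separate` are hypothetical learned modules."""
    sources = []
    residual = mixture
    while graph.number_of_nodes() > 0:
        nodes = select_subgraph(graph, residual)  # nodes for one source
        if not nodes:                             # nothing left to attribute
            break
        audio = separate(graph.subgraph(nodes), residual)
        sources.append((nodes, audio))
        residual = residual - audio               # subtract the separated sound
        graph.remove_nodes_from(nodes)
    return sources
```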