Paper Title
Extending 2D Saliency Models for Head Movement Prediction in 360-degree Images using CNN-based Fusion
Paper Authors
Paper Abstract
Saliency prediction can be of great benefit for 360-degree image/video applications, including compression, streaming, rendering, and viewpoint guidance. It is therefore natural to adapt 2D saliency prediction methods to 360-degree images. To achieve this, the 360-degree image must be projected onto a 2D plane. However, existing projection techniques introduce different distortions, which yield poor results and make the direct application of 2D saliency prediction models to 360-degree content ineffective. Consequently, in this paper, we propose a new framework for effectively applying any 2D saliency prediction method to 360-degree images. In particular, the proposed framework includes a novel convolutional neural network (CNN)-based fusion approach that provides more accurate saliency prediction while avoiding the introduction of distortions. The proposed framework has been evaluated with five 2D saliency prediction methods, and the experimental results show the superiority of our approach over weighted-sum and pixel-wise maximum fusion methods.
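For concreteness, below is a minimal sketch of the two baseline fusion strategies the abstract compares against (weighted sum and pixel-wise maximum), together with an illustrative CNN fusion module. The `SaliencyFusionCNN` architecture and the choice of `n_maps = 6` projected views are assumptions for illustration only; the abstract does not specify the network used in the paper.

```python
import numpy as np
import torch
import torch.nn as nn

def weighted_sum_fusion(maps, weights=None):
    """Baseline: fuse per-projection saliency maps by weighted sum."""
    stack = np.stack(maps, axis=0)                     # (N, H, W)
    if weights is None:
        weights = np.full(len(maps), 1.0 / len(maps))  # uniform weights
    fused = np.tensordot(weights, stack, axes=1)       # (H, W)
    return fused / (fused.max() + 1e-8)                # rescale to [0, 1]

def pixelwise_max_fusion(maps):
    """Baseline: fuse per-projection saliency maps by pixel-wise maximum."""
    fused = np.stack(maps, axis=0).max(axis=0)
    return fused / (fused.max() + 1e-8)

class SaliencyFusionCNN(nn.Module):
    """Illustrative CNN fusion: learns to combine stacked 2D saliency
    maps into a single map. Hypothetical architecture, not the paper's
    actual network."""
    def __init__(self, n_maps=6):  # n_maps: assumed number of projected views
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_maps, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=1), nn.Sigmoid(),  # fused map in [0, 1]
        )

    def forward(self, x):  # x: (B, n_maps, H, W)
        return self.net(x)

# Usage sketch with random stand-in saliency maps:
maps = [np.random.rand(256, 512) for _ in range(6)]
fused_ws = weighted_sum_fusion(maps)
fused_max = pixelwise_max_fusion(maps)
model = SaliencyFusionCNN(n_maps=6)
fused_cnn = model(torch.from_numpy(np.stack(maps)).float().unsqueeze(0))
```

Unlike the two fixed baselines, the learned fusion can weight each projection's contribution per pixel, which is the intuition behind the accuracy gains the abstract reports.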