Paper Title


Subtitle-based Viewport Prediction for 360-degree Virtual Tourism Video

Authors

Chuanzhe Jing, Tho Nguyen Duc, Phan Xuan Tan, Eiji Kamioka

Abstract


360-degree streaming video can provide a rich immersive experience to users. However, it requires an extremely high-bandwidth network. One common solution for reducing bandwidth consumption is to stream only the portion of the video covered by the user's viewport. To do that, predicting the user's viewport is indispensable. Existing viewport prediction methods mainly concentrate on the user's head movement trajectory and video saliency. None of them considers the navigation information contained in the video, which can direct the user's attention to specific regions of the video with high probability. Such information can be included in video subtitles, especially those of 360-degree virtual tourism videos. This fact reveals the potential contribution of video subtitles to viewport prediction. Therefore, in this paper, a subtitle-based viewport prediction model for 360-degree virtual tourism videos is proposed. The model leverages the navigation information in video subtitles, in addition to the head movement trajectory and video saliency, to improve prediction accuracy. Experimental results demonstrate that the proposed model outperforms baseline methods that use only head movement trajectory and video saliency for viewport prediction.
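The "stream only the viewport" idea in the abstract is commonly realized with tile-based streaming: the equirectangular frame is split into a tile grid, and only tiles overlapping the predicted viewport are fetched at high quality. As a rough illustration of that mechanism (not the paper's model), the sketch below maps a predicted viewport center to the tiles it covers; the 8×4 grid, 90° field of view, and the yaw/pitch-rectangle approximation are all assumptions for demonstration:

```python
import math

def viewport_tiles(yaw_deg, pitch_deg, fov_h=90.0, fov_v=90.0,
                   tiles_x=8, tiles_y=4):
    """Tiles of an equirectangular frame overlapped by a viewport.

    yaw is in degrees (wraps mod 360), pitch is in degrees (+90 = up).
    Returns a set of (col, row) pairs with row 0 at the top of the frame.
    Simplification: the viewport is treated as a yaw/pitch rectangle,
    ignoring projection distortion near the poles.
    """
    eps = 1e-9
    tile_w, tile_h = 360.0 / tiles_x, 180.0 / tiles_y

    # Horizontal tile span, with wraparound across the 0/360 yaw seam.
    c0 = math.floor((yaw_deg - fov_h / 2) / tile_w)
    c1 = math.floor((yaw_deg + fov_h / 2) / tile_w - eps)
    cols = {c % tiles_x for c in range(c0, c1 + 1)}

    # Vertical tile span, clamped at the poles.
    pitch_lo = max(-90.0, pitch_deg - fov_v / 2)
    pitch_hi = min(90.0, pitch_deg + fov_v / 2)
    r0 = max(0, math.floor((90.0 - pitch_hi) / tile_h))
    r1 = min(tiles_y - 1, math.floor((90.0 - pitch_lo) / tile_h - eps))

    return {(c, r) for c in cols for r in range(r0, r1 + 1)}
```

For example, a 90°×90° viewport centered at (yaw=0°, pitch=0°) covers 4 of the 32 tiles, so only a fraction of the panorama needs to be streamed at full quality; the accuracy of the viewport prediction determines whether those are the right tiles.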
