Paper Title
Wireless Deep Video Semantic Transmission
Paper Authors
Paper Abstract
In this paper, we design a new class of high-efficiency deep joint source-channel coding methods to achieve end-to-end video transmission over wireless channels. The proposed methods exploit a nonlinear transform and a conditional coding architecture to adaptively extract semantic features across video frames, and transmit the semantic feature-domain representations over wireless channels via deep joint source-channel coding. Our framework is referred to as deep video semantic transmission (DVST). In particular, benefiting from the strong temporal prior provided by the feature-domain context, the learned nonlinear transform function becomes temporally adaptive, resulting in a richer and more accurate entropy model that guides the transmission of the current frame. Accordingly, a novel rate-adaptive transmission mechanism is developed to customize deep joint source-channel coding for video sources. It learns to allocate the limited channel bandwidth within and among video frames to maximize the overall transmission performance. The whole DVST design is formulated as an optimization problem whose goal is to minimize the end-to-end transmission rate-distortion cost under perceptual quality metrics or machine vision task performance metrics. Across standard video source test sequences and various communication scenarios, experiments show that our DVST generally surpasses traditional wireless video coded transmission schemes. The proposed DVST framework can well support future semantic communications owing to its video content awareness and its ability to integrate machine vision tasks.
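The abstract's rate-adaptive transmission idea can be illustrated with a small sketch. The snippet below is an assumption-laden illustration, not the authors' learned mechanism: it takes per-patch entropy estimates (as a conditional entropy model might produce for regions of a frame) and allocates a fixed total budget of channel symbols proportionally, so higher-entropy regions receive more bandwidth. The function name `allocate_bandwidth` and the exponent parameter `eta` are hypothetical.

```python
import numpy as np

def allocate_bandwidth(entropies, total_symbols, eta=1.0):
    """Illustrative (hypothetical) proportional bandwidth allocation:
    map per-patch entropy estimates to integer channel-symbol counts
    that sum exactly to the available budget."""
    entropies = np.asarray(entropies, dtype=float)
    weights = entropies ** eta            # emphasize high-entropy patches
    raw = total_symbols * weights / weights.sum()
    k = np.floor(raw).astype(int)         # integer base allocation
    # hand out leftover symbols to patches with the largest fractional parts
    remainder = int(total_symbols - k.sum())
    order = np.argsort(raw - k)[::-1]
    k[order[:remainder]] += 1
    return k

# e.g. four patches with rising entropy sharing 100 channel symbols
counts = allocate_bandwidth([1.0, 2.0, 3.0, 4.0], 100)
```

In DVST this allocation is learned end-to-end rather than computed by a fixed rule; the sketch only conveys why an entropy model naturally guides where channel bandwidth should go.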