Paper Title
Multi-task Learning-based CSI Feedback Design in Multiple Scenarios
Paper Authors
Paper Abstract
For frequency division duplex systems, the essential downlink channel state information (CSI) feedback comprises the links of compression, feedback, decompression, and reconstruction to reduce the feedback overhead. One efficient CSI feedback method is the deep-learning-based Auto-Encoder (AE) structure, yet it faces problems in actual deployment, such as selecting the deployment mode for a cell with multiple complex scenarios. Rather than designing a single AE network of huge complexity to handle the CSI of all scenarios, a more realistic mode is to divide the CSI dataset by region/scenario and use multiple relatively simple AE networks to handle each subregion's CSI. However, both modes require high memory capacity at the user equipment (UE) and are not suitable for low-end devices. In this paper, we propose a new, user-friendly framework based on the latter multi-tasking mode. Via Multi-Task Learning, our framework, Single-encoder-to-Multiple-decoders (S-to-M), integrates the multiple independent AEs into a joint architecture: a shared encoder corresponds to multiple task-specific decoders. We also complete our framework with GateNet as a classifier, enabling the base station to autonomously select the task-specific decoder corresponding to each subregion. Experiments on a simulated multi-scenario CSI dataset demonstrate the advantages of the proposed S-to-M over the other benchmark modes, i.e., significantly reduced model complexity and UE memory consumption.
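To make the S-to-M idea concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: one shared encoder compresses the CSI of every subregion on the UE side, one task-specific decoder per subregion reconstructs it at the base station, and a small GateNet classifier selects the decoder from the fed-back codeword. All layer choices, dimensions (csi_dim, code_dim), and the number of subregions (num_tasks) are illustrative assumptions; the paper's actual network design is not reproduced here.

```python
# Minimal sketch of the S-to-M framework described in the abstract.
# Assumptions: single linear layers stand in for the real encoder/decoders,
# and csi_dim / code_dim / num_tasks are placeholder values.
import torch
import torch.nn as nn


class SharedEncoder(nn.Module):
    """UE side: compresses a flattened CSI sample into a short feedback codeword."""
    def __init__(self, csi_dim: int = 2048, code_dim: int = 128):
        super().__init__()
        self.fc = nn.Linear(csi_dim, code_dim)

    def forward(self, csi: torch.Tensor) -> torch.Tensor:
        return self.fc(csi)


class TaskDecoder(nn.Module):
    """BS side: reconstructs the CSI of one subregion/scenario from the codeword."""
    def __init__(self, code_dim: int = 128, csi_dim: int = 2048):
        super().__init__()
        self.fc = nn.Linear(code_dim, csi_dim)

    def forward(self, code: torch.Tensor) -> torch.Tensor:
        return self.fc(code)


class GateNet(nn.Module):
    """Classifier that maps a codeword to a subregion index, selecting a decoder."""
    def __init__(self, code_dim: int = 128, num_tasks: int = 4):
        super().__init__()
        self.fc = nn.Linear(code_dim, num_tasks)

    def forward(self, code: torch.Tensor) -> torch.Tensor:
        return self.fc(code)  # logits over subregions


class SToM(nn.Module):
    """Joint architecture: shared encoder + per-subregion decoders + GateNet."""
    def __init__(self, csi_dim: int = 2048, code_dim: int = 128, num_tasks: int = 4):
        super().__init__()
        self.encoder = SharedEncoder(csi_dim, code_dim)
        self.decoders = nn.ModuleList(
            TaskDecoder(code_dim, csi_dim) for _ in range(num_tasks)
        )
        self.gate = GateNet(code_dim, num_tasks)

    def forward(self, csi: torch.Tensor) -> torch.Tensor:
        code = self.encoder(csi)               # UE: compress, then feed back
        task = self.gate(code).argmax(dim=-1)  # BS: infer the subregion per sample
        return torch.stack(
            [self.decoders[int(t)](c) for c, t in zip(code, task)]
        )                                      # BS: reconstruct with the chosen decoder
```

Under this sketch, only the single shared encoder (plus its codeword) lives on the UE, while the multiple decoders and GateNet reside at the base station, which is the source of the claimed reduction in UE memory consumption; during training the true subregion label could be used in place of GateNet's prediction.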