Paper Title
Federated Deep Reinforcement Learning for Open RAN Slicing in 6G Networks
Paper Authors
Paper Abstract
Radio access network (RAN) slicing is a key element in enabling current 5G networks and next-generation networks to meet the requirements of different services in various verticals. However, the heterogeneous nature of these services' requirements, along with the limited RAN resources, makes RAN slicing very complex. Indeed, the challenge that mobile virtual network operators (MVNOs) face is to rapidly adapt their RAN slicing strategies to frequent changes in environment constraints and service requirements. Machine learning techniques, such as deep reinforcement learning (DRL), are increasingly considered a key enabler for automating the management and orchestration of RAN slicing operations. Nevertheless, the ability to generalize DRL models to multiple RAN slicing environments may be limited, due to their strong dependence on the environment data on which they are trained. Federated learning enables MVNOs to leverage more diverse training inputs for DRL without the high cost of collecting this data from different RANs. In this paper, we propose a federated deep reinforcement learning approach for RAN slicing. In this approach, MVNOs collaborate to improve the performance of their DRL-based RAN slicing models. Each MVNO trains a DRL model and sends it for aggregation. The aggregated model is then sent back to each MVNO for immediate use and further training. The simulation results show the effectiveness of the proposed federated DRL approach.
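The train-locally / aggregate / redistribute loop described in the abstract can be sketched as follows. This is a minimal illustration assuming FedAvg-style equal-weight averaging of model parameters; the model shapes, the `local_train` placeholder, and the number of MVNOs and rounds are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def local_train(weights, rng):
    # Placeholder for one round of local DRL training at an MVNO on its own
    # RAN environment: a small random perturbation stands in for gradient steps.
    return [w + 0.01 * rng.standard_normal(w.shape) for w in weights]

def aggregate(models):
    # Element-wise average of the MVNOs' model weights
    # (FedAvg with equal weighting across participants).
    return [np.mean(layer_group, axis=0) for layer_group in zip(*models)]

def federated_rounds(num_mvnos=3, num_rounds=5, seed=0):
    rng = np.random.default_rng(seed)
    # Shared initial model: two toy weight arrays stand in for a DRL policy network.
    global_model = [np.zeros((4, 4)), np.zeros((4, 1))]
    for _ in range(num_rounds):
        # Each MVNO trains the current global model locally...
        local_models = [local_train(global_model, rng) for _ in range(num_mvnos)]
        # ...sends it for aggregation, and receives the aggregated model back
        # for immediate use and further training in the next round.
        global_model = aggregate(local_models)
    return global_model

model = federated_rounds()
```

Only model weights cross MVNO boundaries in this loop; the per-RAN training data stays local, which is the cost saving the abstract points to.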