Paper Title
Evolutionary Deep Reinforcement Learning for Dynamic Slice Management in O-RAN
Paper Authors
Paper Abstract
Next-generation wireless networks are required to satisfy a variety of services and criteria concurrently. To meet these upcoming strict requirements, a new open radio access network (O-RAN) has been developed, with distinguishing features such as a flexible design, disaggregated virtual and programmable components, and intelligent closed-loop control. O-RAN slicing is being investigated as a critical strategy for ensuring network quality of service (QoS) in the face of changing circumstances. However, distinct network slices must be dynamically controlled to avoid service level agreement (SLA) variation caused by rapid changes in the environment. Therefore, this paper introduces a novel framework able to manage network slices intelligently through provisioned resources. Owing to diverse, heterogeneous environments, intelligent machine learning approaches require sufficient exploration to handle the harshest situations in a wireless network and to accelerate convergence. To solve this problem, a new solution is proposed based on evolution-based deep reinforcement learning (EDRL) to accelerate and optimize the slice management learning process in the radio access network (RAN) intelligent controller (RIC) modules. To this end, O-RAN slicing is represented as a Markov decision process (MDP), which is then solved optimally for resource allocation to meet service demands using the EDRL approach. Simulation results show that, in terms of meeting service demands, the proposed approach outperforms the DRL baseline by 62.2%.
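
The abstract leaves the concrete MDP formulation and the EDRL algorithm to the paper body. As a rough, non-authoritative illustration of the idea, the sketch below pairs a toy slicing MDP (per-slice traffic demand as the state, resource-block allocation shares as the action, and the served-demand ratio as an SLA-style reward) with a simple population-based evolutionary search over policy parameters. Everything here, including SliceEnv, N_SLICES, TOTAL_RBS, and the reward shape, is an illustrative assumption rather than the authors' implementation, and plain neuroevolution stands in for the paper's EDRL.

```python
import numpy as np

# --- Toy stand-in for the O-RAN slicing MDP (all constants are assumptions) ---
N_SLICES = 3        # e.g. eMBB / URLLC / mMTC-style slices
TOTAL_RBS = 100     # resource blocks the RIC can distribute per step
EP_LEN = 50         # steps per evaluation episode

rng = np.random.default_rng(0)

class SliceEnv:
    """State: normalized per-slice demand. Action: allocation shares.
    Reward: fraction of total demand served (an SLA-style proxy)."""
    def reset(self):
        self.demand = rng.uniform(10.0, 40.0, N_SLICES)
        return self.demand / TOTAL_RBS

    def step(self, shares):
        served = np.minimum(shares * TOTAL_RBS, self.demand)
        reward = served.sum() / self.demand.sum()     # in (0, 1]
        # Drift the demand to mimic a rapidly changing environment.
        self.demand = np.clip(self.demand + rng.normal(0.0, 3.0, N_SLICES),
                              5.0, 60.0)
        return self.demand / TOTAL_RBS, reward

def act(theta, state):
    """Linear policy; softmax turns logits into shares that sum to 1."""
    logits = theta.reshape(N_SLICES, N_SLICES) @ state
    e = np.exp(logits - logits.max())
    return e / e.sum()

def episode_return(theta):
    env, total = SliceEnv(), 0.0
    state = env.reset()
    for _ in range(EP_LEN):
        state, r = env.step(act(theta, state))
        total += r
    return total

# --- Population-based evolutionary search over policy parameters ---
POP, ELITES, SIGMA, GENS = 32, 8, 0.1, 40
mean = np.zeros(N_SLICES * N_SLICES)
for gen in range(GENS):
    pop = mean + SIGMA * rng.standard_normal((POP, mean.size))
    fitness = np.array([episode_return(p) for p in pop])
    elites = pop[np.argsort(fitness)[-ELITES:]]   # keep the fittest policies
    mean = elites.mean(axis=0)                    # recombine into a new mean
    if gen % 10 == 0:
        print(f"gen {gen:3d}  best return {fitness.max():.2f} / {EP_LEN}")
```

Refining the elite policies with gradient-based DRL updates between generations is one common way to realize the EDRL hybrid the abstract describes; the pure evolutionary step shown here covers only the exploration half of that picture.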