Paper Title

Federated Meta-Learning for Traffic Steering in O-RAN

Paper Authors

Hakan Erdol, Xiaoyang Wang, Peizheng Li, Jonathan D. Thomas, Robert Piechocki, George Oikonomou, Rui Inacio, Abdelrahim Ahmad, Keith Briggs, Shipra Kapoor

Paper Abstract

The vision of 5G lies in providing high data rates, low latency (enabling near-real-time applications), significantly increased base station capacity, and near-perfect quality of service (QoS) for users, compared to LTE networks. In order to provide such services, 5G systems will support various combinations of access technologies such as LTE, NR, NR-U and Wi-Fi. Each radio access technology (RAT) provides a different type of access, and these should be allocated and managed optimally among the users. Besides resource management, 5G systems will also support a dual connectivity service. The orchestration of the network therefore becomes a more difficult problem for system managers than with legacy access technologies. In this paper, we propose an algorithm for RAT allocation based on federated meta-learning (FML), which enables RAN intelligent controllers (RICs) to adapt more quickly to dynamically changing environments. We have designed a simulation environment which contains LTE and 5G NR service technologies. In the simulation, our objective is to fulfil UE demands within the transmission deadline, thereby providing higher QoS values. We compared our proposed algorithm with a single RL agent, the Reptile algorithm and a rule-based heuristic method. Simulation results show that the proposed FML method achieves caching rates that are 21% and 12% higher, respectively, at the first deployment round. Moreover, the proposed approach adapts to new tasks and environments most quickly amongst the compared methods.
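The paper itself does not include code, but the FML scheme it describes combines a Reptile-style inner loop (each RIC adapts a model to its local task for a few gradient steps) with federated averaging of the resulting meta-deltas at a central server. The sketch below illustrates that combination on toy quadratic objectives standing in for per-RIC RL losses; the function names, learning rates, and tasks are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def local_adaptation(theta, grad_fn, inner_steps=5, lr=0.01):
    """Reptile inner loop: a few SGD steps on one RIC's local task."""
    phi = theta.copy()
    for _ in range(inner_steps):
        phi -= lr * grad_fn(phi)
    return phi

def federated_meta_update(theta, client_grad_fns, meta_lr=0.5):
    """One FML round: every client adapts locally, then the server
    averages the Reptile deltas (phi - theta) and moves the global model."""
    deltas = [local_adaptation(theta, g) - theta for g in client_grad_fns]
    return theta + meta_lr * np.mean(deltas, axis=0)

# Hypothetical per-RIC tasks: quadratic bowls with different optima,
# standing in for the heterogeneous traffic environments in the paper.
targets = [np.array([1.0, 2.0]), np.array([3.0, -1.0])]
grad_fns = [lambda th, t=t: 2.0 * (th - t) for t in targets]

theta = np.zeros(2)
for _ in range(200):
    theta = federated_meta_update(theta, grad_fns)
# theta converges toward an initialization that adapts quickly to
# either task (here, near the mean of the task optima).
```

The key design point mirrored from the abstract is that only model deltas leave each client, so raw network data stays local, while the averaged meta-initialization is what lets a newly deployed RIC adapt within a few rounds.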
