Paper Title

Knowledge Distillation for Mobile Edge Computation Offloading

Authors

Haowei Chen, Liekang Zeng, Shuai Yu, Xu Chen

Abstract

Edge computation offloading allows mobile end devices to offload the execution of compute-intensive tasks to edge servers. End devices can decide, in an online manner, whether to offload tasks to edge servers or cloud servers, or to execute them locally, according to the current network conditions and the devices' profiles. In this article, we propose an edge computation offloading framework based on Deep Imitation Learning (DIL) and Knowledge Distillation (KD), which assists end devices in quickly making fine-grained decisions to optimize the latency of computation tasks online. We formulate the computation offloading problem as a multi-label classification problem. Training samples for our DIL model are generated in an offline manner. After the model is trained, we leverage knowledge distillation to obtain a lightweight DIL model, which further reduces the inference delay. Numerical experiments show that the offloading decisions made by our model outperform those made by other related policies on the latency metric. Moreover, our model has the shortest inference delay among all policies.
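The abstract describes distilling a large Deep Imitation Learning (DIL) offloading policy into a lightweight student model, with offloading decisions cast as multi-label classification. The following is a minimal PyTorch sketch of that kind of teacher-to-student distillation, not the authors' implementation; the state dimension, number of decision labels, layer sizes, temperature, and loss weighting are all illustrative assumptions rather than values from the paper.

```python
# Minimal sketch (assumed, not the authors' code): distilling a large DIL
# offloading policy into a lightweight student for faster inference.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM = 16   # assumed size of the device/network state vector
NUM_LABELS = 8   # assumed number of per-subtask offloading decisions (multi-label)

def mlp(in_dim, hidden, out_dim):
    layers, prev = [], in_dim
    for h in hidden:
        layers += [nn.Linear(prev, h), nn.ReLU()]
        prev = h
    layers.append(nn.Linear(prev, out_dim))
    return nn.Sequential(*layers)

teacher = mlp(STATE_DIM, [256, 256, 128], NUM_LABELS)  # large DIL model (assumed sizes)
student = mlp(STATE_DIM, [32], NUM_LABELS)             # lightweight distilled model

def distillation_loss(student_logits, teacher_logits, hard_labels,
                      temperature=2.0, alpha=0.7):
    """Blend soft-target loss (teacher outputs) with hard-label loss (expert decisions)."""
    # Soft targets: teacher per-label probabilities at a raised temperature.
    soft_targets = torch.sigmoid(teacher_logits / temperature)
    soft_loss = F.binary_cross_entropy_with_logits(
        student_logits / temperature, soft_targets)
    # Hard targets: the expert offloading decisions from the offline samples.
    hard_loss = F.binary_cross_entropy_with_logits(student_logits, hard_labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
states = torch.randn(64, STATE_DIM)                      # placeholder batch of states
labels = torch.randint(0, 2, (64, NUM_LABELS)).float()   # placeholder expert decisions

with torch.no_grad():
    teacher_logits = teacher(states)   # teacher is frozen during distillation
loss = distillation_loss(student(states), teacher_logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

At deployment, only the small student network would run on the end device, which is how the inference-delay reduction described in the abstract is obtained.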
