Paper Title
A Reliability-aware Multi-armed Bandit Approach to Learn and Select Users in Demand Response
Paper Authors
Paper Abstract
One challenge in the optimization and control of societal systems is handling unknown and uncertain user behavior. This paper focuses on residential demand response (DR) and proposes a closed-loop learning scheme to address these issues. In particular, we consider DR programs where an aggregator calls upon residential users to change their demand so that the total load adjustment is close to a target value. To learn and select the right users, we formulate the DR problem as a combinatorial multi-armed bandit (CMAB) problem with a reliability objective. We propose a learning algorithm, CUCB-Avg (Combinatorial Upper Confidence Bound-Average), which utilizes both upper confidence bounds and sample averages to balance the tradeoff between exploration (learning) and exploitation (selecting). We consider both a fixed time-invariant target and time-varying targets, and show that CUCB-Avg achieves $O(\log T)$ and $O(\sqrt{T \log(T)})$ regret, respectively. Finally, we numerically test our algorithm using synthetic and real data, and demonstrate that CUCB-Avg performs significantly better than the classic CUCB and also better than Thompson Sampling.
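The abstract describes combining optimistic UCB indices (for exploration) with sample averages (for user selection against a target). The paper's exact CUCB-Avg update and selection rules are not given here, so the following is only a minimal illustrative sketch of that idea: rank users by a standard UCB index, then greedily add users until their sample-average load reductions first cover the target. The function names, the exploration constant `alpha`, and the greedy stopping rule are all hypothetical, not the paper's specification.

```python
import math

def ucb_index(mean, count, t, alpha=1.5):
    # Optimistic index: sample mean plus an exploration bonus that shrinks
    # as a user (arm) is selected more often. alpha is a hypothetical
    # tuning constant; untried users get an infinite index so they are
    # explored first.
    if count == 0:
        return float("inf")
    return mean + math.sqrt(alpha * math.log(t) / count)

def select_users(means, counts, t, target):
    # Rank users by their UCB index (exploration), then greedily add users
    # until the sum of their *sample-average* estimated load reductions
    # first reaches the target (exploitation of the averages).
    order = sorted(range(len(means)),
                   key=lambda i: -ucb_index(means[i], counts[i], t))
    chosen, total = [], 0.0
    for i in order:
        if total >= target:
            break
        chosen.append(i)
        total += means[i]
    return chosen

# Example: with equal counts, the UCB ordering reduces to ordering by mean,
# so user 1 (mean 0.9) is picked first, then user 0 covers the 1.0 target.
print(select_users([0.5, 0.9, 0.2], [10, 10, 10], t=100, target=1.0))
```

This sketch only shows why tracking both the optimistic index and the plain average is useful when the objective is to match a target rather than to maximize total reward, which is the reliability aspect the abstract highlights.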