Paper Title

Joint Coreset Construction and Quantization for Distributed Machine Learning

Paper Authors

Hanlin Lu, Changchang Liu, Shiqiang Wang, Ting He, Vijay Narayanan, Kevin S. Chan, Stephen Pasteris

Paper Abstract

Coresets are small, weighted summaries of larger datasets, aiming at providing provable error bounds for machine learning (ML) tasks while significantly reducing the communication and computation costs. To achieve a better trade-off between ML error bounds and costs, we propose the first framework to incorporate quantization techniques into the process of coreset construction. Specifically, we theoretically analyze the ML error bounds caused by a combination of coreset construction and quantization. Based on that, we formulate an optimization problem to minimize the ML error under a fixed budget of communication cost. To improve the scalability for large datasets, we identify two proxies of the original objective function, for which efficient algorithms are developed. For the case of data on multiple nodes, we further design a novel algorithm to allocate the communication budget to the nodes while minimizing the overall ML error. Through extensive experiments on multiple real-world datasets, we demonstrate the effectiveness and efficiency of our proposed algorithms for a variety of ML tasks. In particular, our algorithms have achieved more than 90% data reduction with less than 10% degradation in ML performance in most cases.
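The abstract does not spell out a concrete algorithm, so the sketch below is only a hypothetical illustration of the two building blocks it combines: a weighted coreset that summarizes a dataset, followed by per-coordinate uniform quantization whose bit width trades communication cost against added error. The importance-sampling heuristic in `build_coreset`, the uniform quantizer, and all parameter names are assumptions made for illustration, not the paper's actual construction.

```python
import numpy as np

def build_coreset(points, coreset_size, rng=None):
    """Build a small weighted summary by importance sampling.

    Illustrative heuristic only: points are sampled with probability
    proportional to their distance from the dataset mean (a crude
    sensitivity proxy), and each sample is weighted by the inverse of
    its sampling probability so weighted sums stay unbiased.
    """
    rng = np.random.default_rng() if rng is None else rng
    dist = np.linalg.norm(points - points.mean(axis=0), axis=1) + 1e-12
    prob = dist / dist.sum()
    idx = rng.choice(len(points), size=coreset_size, replace=True, p=prob)
    weights = 1.0 / (coreset_size * prob[idx])
    return points[idx], weights

def quantize(points, bits):
    """Uniformly quantize each coordinate to 2**bits levels.

    Fewer bits means a smaller message but a larger quantization error;
    balancing this against the coreset size is the kind of trade-off
    the paper's optimization addresses.
    """
    lo, hi = points.min(axis=0), points.max(axis=0)
    levels = 2 ** bits - 1
    scale = np.where(hi > lo, (hi - lo) / levels, 1.0)
    return np.round((points - lo) / scale) * scale + lo

# Example: summarize 10,000 points into a 200-point, 8-bit coreset.
data = np.random.default_rng(0).normal(size=(10_000, 16))
core, w = build_coreset(data, coreset_size=200)
core_q = quantize(core, bits=8)
```

Transmitting `core_q` and `w` instead of `data` reduces both the number of points and the bits per coordinate, which is the communication saving the framework quantifies.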
