Paper Title

Making Look-Ahead Active Learning Strategies Feasible with Neural Tangent Kernels

Authors

Mohamad Amin Mohamadi, Wonho Bae, Danica J. Sutherland

Abstract

We propose a new method for approximating active learning acquisition strategies that are based on retraining with hypothetically-labeled candidate data points. Although this is usually infeasible with deep networks, we use the neural tangent kernel to approximate the result of retraining, and prove that this approximation works asymptotically even in an active learning setup -- approximating "look-ahead" selection criteria with far less computation required. This also enables us to conduct sequential active learning, i.e. updating the model in a streaming regime, without needing to retrain the model with SGD after adding each new data point. Moreover, our querying strategy, which better understands how the model's predictions will change by adding new data points in comparison to the standard ("myopic") criteria, beats other look-ahead strategies by large margins, and achieves equal or better performance compared to state-of-the-art methods on several benchmark datasets in pool-based active learning.
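The core idea in the abstract — scoring a candidate point by how much the model's predictions would change if it were added with a hypothetical label, using a closed-form kernel solve instead of SGD retraining — can be illustrated with a minimal sketch. This is not the paper's implementation: it uses an RBF kernel as a stand-in for the empirical neural tangent kernel, plain kernel-regression means in place of the paper's NTK approximation to retraining, and a mean-absolute-change acquisition score as a simple hypothetical criterion.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Stand-in for the empirical NTK; any PSD kernel fits this role.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_regression(X_train, y_train, X_query, reg=1e-6):
    # Closed-form "retrained" predictor: f(x) = K(x, X) (K(X, X) + reg*I)^{-1} y
    K = rbf_kernel(X_train, X_train) + reg * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)
    return rbf_kernel(X_query, X_train) @ alpha

def look_ahead_change(X_lab, y_lab, x_cand, y_hyp, X_pool):
    # How much do pool predictions move if (x_cand, y_hyp) is added and the
    # model is "retrained"? Here retraining is a cheap closed-form solve,
    # which is what makes the look-ahead criterion feasible to evaluate.
    f_now = kernel_regression(X_lab, y_lab, X_pool)
    X_new = np.vstack([X_lab, x_cand[None]])
    y_new = np.append(y_lab, y_hyp)
    f_next = kernel_regression(X_new, y_new, X_pool)
    return np.abs(f_next - f_now).mean()

rng = np.random.default_rng(0)
X_lab = rng.normal(size=(10, 2)); y_lab = np.sign(X_lab[:, 0])
X_pool = rng.normal(size=(50, 2))

# Score each pool point by its worst-case (over hypothetical labels) effect.
scores = [max(look_ahead_change(X_lab, y_lab, x, y, X_pool) for y in (-1.0, 1.0))
          for x in X_pool]
best = int(np.argmax(scores))  # candidate to query next
```

A real acquisition loop would then query the label of `X_pool[best]`, fold it into the labeled set, and repeat; in the sequential setting described above, the kernel solve can be updated incrementally rather than recomputed from scratch.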
