Paper Title

A Survey of Learning on Small Data: Generalization, Optimization, and Challenge

Authors

Xiaofeng Cao, Weixin Bu, Shengjun Huang, Minling Zhang, Ivor W. Tsang, Yew Soon Ong, James T. Kwok

Abstract

Learning on big data brings success to artificial intelligence (AI), but annotation and training costs are expensive. In the future, learning on small data that approximates the generalization ability of big data is one of the ultimate goals of AI; it requires machines to recognize objectives and scenarios from small data, as humans do. A series of learning topics, such as active learning and few-shot learning, is moving in this direction. However, there are few theoretical guarantees on their generalization performance. Moreover, most of their settings are passive; that is, the label distribution is explicitly controlled by finite training resources drawn from known distributions. This survey follows the agnostic active sampling theory under the PAC (Probably Approximately Correct) framework to analyze the generalization error and label complexity of learning on small data in model-agnostic supervised and unsupervised fashions. Considering that multiple learning communities could produce small data representations, and that related topics have already been well surveyed, we add novel geometric representation perspectives for small data: the Euclidean and non-Euclidean (hyperbolic) means, for which the optimization solutions, including Euclidean gradients, non-Euclidean gradients, and Stein gradients, are presented and discussed. We then summarize multiple learning communities that may be improved by learning on small data and that yield data-efficient representations, such as transfer learning, contrastive learning, and graph representation learning. Meanwhile, we find that meta-learning may provide effective parameter-update policies for learning on small data. We then explore multiple challenging scenarios for small data, such as weak supervision and multi-label learning. Finally, we survey multiple data applications that may benefit from efficient small data representations.
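The Euclidean versus hyperbolic mean contrast mentioned in the abstract can be made concrete with a minimal sketch (not from the paper itself): the Euclidean mean has a closed form, while the hyperbolic (Fréchet) mean in the Poincaré ball is found by Riemannian gradient descent. The finite-difference gradient, step size, and sample points below are illustrative assumptions, not the survey's algorithm.

```python
import numpy as np

def poincare_dist(u, v):
    # Geodesic distance between two points of the open unit ball
    # under the Poincare-ball model of hyperbolic space.
    sq = np.sum((u - v) ** 2)
    denom = (1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2))
    return np.arccosh(1 + 2 * sq / denom)

def frechet_mean_poincare(points, lr=0.05, steps=500, eps=1e-5):
    """Riemannian gradient descent for argmin_x sum_i d(x, p_i)^2,
    i.e. the hyperbolic (Frechet) mean in the Poincare ball."""
    f = lambda y: sum(poincare_dist(y, p) ** 2 for p in points)
    x = np.mean(points, axis=0) * 0.5  # start strictly inside the ball
    h = 1e-6
    for _ in range(steps):
        # Euclidean gradient via central finite differences (kept simple
        # on purpose; an autodiff library would do this analytically).
        g = np.zeros_like(x)
        for k in range(len(x)):
            e = np.zeros_like(x)
            e[k] = h
            g[k] = (f(x + e) - f(x - e)) / (2 * h)
        # Rescale by the inverse Poincare metric to get the Riemannian gradient.
        g *= (1 - np.sum(x ** 2)) ** 2 / 4
        x = x - lr * g
        # Retract back inside the unit ball if a step overshoots.
        n = np.linalg.norm(x)
        if n >= 1:
            x = x / n * (1 - eps)
    return x

pts = np.array([[0.1, 0.2], [0.3, -0.1], [-0.2, 0.15]])
euclid_mean = pts.mean(axis=0)         # ordinary Euclidean mean
hyp_mean = frechet_mean_poincare(pts)  # hyperbolic counterpart
```

For points this close to the origin the two means nearly coincide; the gap grows as points approach the boundary of the ball, which is why hyperbolic representations are attractive for hierarchical small-data embeddings.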
