Paper Title

Optimal and Safe Estimation for High-Dimensional Semi-Supervised Learning

Authors

Siyi Deng, Yang Ning, Jiwei Zhao, Heping Zhang

Abstract

We consider the estimation problem in high-dimensional semi-supervised learning. Our goal is to investigate when and how the unlabeled data can be exploited to improve the estimation of the regression parameters of a linear model, in light of the fact that such linear models may be misspecified in data analysis. We first establish the minimax lower bound for parameter estimation in the semi-supervised setting, and show that this lower bound cannot be achieved by supervised estimators using the labeled data only. We propose an optimal semi-supervised estimator that can attain this lower bound and therefore improve on the supervised estimators, provided that the conditional mean function can be consistently estimated at a proper rate. We further propose a safe semi-supervised estimator. We view it as safe because this estimator is always at least as good as the supervised estimators. We also extend our idea to the aggregation of multiple semi-supervised estimators arising from different misspecifications of the conditional mean function. Extensive numerical simulations and a real data analysis are conducted to illustrate our theoretical results.
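To make the setting concrete, here is a minimal NumPy sketch of the generic semi-supervised idea the abstract describes: a supervised least-squares fit on labeled data versus a bias-corrected imputation estimator that also uses unlabeled covariates through an estimated conditional mean. The quadratic working model and all variable names are illustrative assumptions; this is not the authors' exact estimator or rate analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the true conditional mean is nonlinear, so the linear
# working model is misspecified -- the situation studied in the paper.
n, N, p = 100, 2000, 3                 # labeled size, unlabeled size, dimension
X_lab = rng.normal(size=(n, p))
X_unl = rng.normal(size=(N, p))
f = lambda X: X[:, 0] + 0.5 * X[:, 1] ** 2   # true conditional mean (assumed)
y_lab = f(X_lab) + rng.normal(size=n)

def ols(X, y):
    """Least-squares fit with an intercept column."""
    Z = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(Z, y, rcond=None)[0]

# Supervised estimator: uses the labeled data only.
beta_sup = ols(X_lab, y_lab)

# A crude estimate of the conditional mean (quadratic features);
# hypothetical choice -- any consistent estimator could be plugged in.
def quad(X):
    return np.column_stack([X, X ** 2])

gamma = ols(quad(X_lab), y_lab)
m_hat = lambda X: np.column_stack([np.ones(len(X)), quad(X)]) @ gamma

# Bias-corrected imputation: regress imputed means on all covariates,
# then correct with the labeled-data residual fit. A generic
# semi-supervised construction, sketched for illustration only.
X_all = np.vstack([X_lab, X_unl])
beta_ss = ols(X_all, m_hat(X_all)) + (beta_sup - ols(X_lab, m_hat(X_lab)))

print("supervised:     ", beta_sup.round(2))
print("semi-supervised:", beta_ss.round(2))
```

Because the correction term cancels the imputation bias on the labeled sample, the semi-supervised estimator targets the same linear projection as the supervised one while borrowing the large unlabeled sample to stabilize the covariate part of the fit.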
