Paper Title

Model Optimization in Imbalanced Regression

Paper Authors

Aníbal Silva, Rita P. Ribeiro, Nuno Moniz

Paper Abstract

Imbalanced domain learning aims to produce accurate models in predicting instances that, though underrepresented, are of utmost importance for the domain. Research in this field has been mainly focused on classification tasks. Comparatively, the number of studies carried out in the context of regression tasks is negligible. One of the main reasons for this is the lack of loss functions capable of focusing on minimizing the errors of extreme (rare) values. Recently, an evaluation metric was introduced: Squared Error Relevance Area (SERA). This metric posits a bigger emphasis on the errors committed at extreme values while also accounting for the performance in the overall target variable domain, thus preventing severe bias. However, its effectiveness as an optimization metric is unknown. In this paper, our goal is to study the impacts of using SERA as an optimization criterion in imbalanced regression tasks. Using gradient boosting algorithms as proof of concept, we perform an experimental study with 36 data sets of different domains and sizes. Results show that models that used SERA as an objective function are practically better than the models produced by their respective standard boosting algorithms at the prediction of extreme values. This confirms that SERA can be embedded as a loss function into optimization-based learning algorithms for imbalanced regression scenarios.
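The abstract characterizes SERA as an area under the curve of squared errors restricted to increasingly relevant (extreme) instances. As a minimal sketch, not the paper's implementation, the NumPy snippet below approximates SERA numerically and derives the per-instance gradient and Hessian that a gradient boosting library's custom-objective hook typically expects; the relevance values `phi` and the toy data are assumptions for illustration (in practice relevance is derived from the target distribution).

```python
# Minimal sketch of SERA and its derivatives for use as a boosting objective.
# Assumption: phi holds relevance values in [0, 1] for each training target.
import numpy as np


def sera(y_true, y_pred, phi, steps=1000):
    """Approximate SERA = integral over t in [0, 1] of SER_t dt, where SER_t
    is the squared error summed over instances with relevance phi >= t."""
    thresholds = np.linspace(0.0, 1.0, steps + 1)
    sq_err = (y_true - y_pred) ** 2
    # SER_t for each cutoff t: only instances whose relevance reaches t count.
    ser_t = np.array([sq_err[phi >= t].sum() for t in thresholds])
    # Trapezoidal rule over the threshold grid approximates the area (SERA).
    return np.trapz(ser_t, thresholds)


def sera_grad_hess(y_true, y_pred, phi):
    """Per-instance first and second derivatives of SERA w.r.t. predictions.
    Because the indicator 1[phi_i >= t] integrates to phi_i over t in [0, 1],
    SERA reduces to sum_i phi_i * (y_pred_i - y_true_i)**2, so the gradient is
    2 * phi_i * (y_pred_i - y_true_i) and the Hessian diagonal is 2 * phi_i."""
    grad = 2.0 * phi * (y_pred - y_true)
    hess = 2.0 * phi * np.ones_like(y_pred)
    return grad, hess


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = rng.normal(size=200)
    pred = y + rng.normal(scale=0.5, size=200)
    # Toy relevance: more extreme targets get relevance closer to 1.
    rel = np.clip(np.abs(y) / np.abs(y).max(), 0.0, 1.0)
    print("SERA:", sera(y, pred, rel))
    print("grad/hess shapes:", [a.shape for a in sera_grad_hess(y, pred, rel)])
```

The two arrays returned by `sera_grad_hess` are the usual ingredients for plugging a differentiable loss into gradient boosting frameworks that accept user-defined objectives, which mirrors the proof-of-concept setup the abstract describes.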
