Paper Title

Algorithmic Decision-Making Safeguarded by Human Knowledge

Authors

Ningyuan Chen, Ming Hu, Wenhao Li

Abstract

Commercial AI solutions provide analysts and managers with data-driven business intelligence for a wide range of decisions, such as demand forecasting and pricing. However, human analysts may have their own insights and experience about the decision that are at odds with the algorithmic recommendation. In view of such a conflict, we provide a general analytical framework to study the augmentation of algorithmic decisions with human knowledge: the analyst uses this knowledge to set a guardrail by which the algorithmic decision is clipped if the algorithmic output falls out of bounds and appears unreasonable. We study the conditions under which the augmentation is beneficial relative to the raw algorithmic decision. We show that when the algorithmic decision is asymptotically optimal with large data, the non-data-driven human guardrail usually provides no benefit. However, we point out three common pitfalls of algorithmic decisions: (1) lack of domain knowledge, such as market competition, (2) model misspecification, and (3) data contamination. In these cases, even with sufficient data, the augmentation from human knowledge can still improve the performance of the algorithmic decision.
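The guardrail mechanism described in the abstract can be illustrated with a minimal sketch, assuming the analyst's knowledge is expressed as a simple interval [lower_bound, upper_bound] within which the decision is considered reasonable; the function name and the pricing numbers below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def guardrail_clip(algorithmic_decision, lower_bound, upper_bound):
    """Clip an algorithmic decision to the analyst's guardrail interval.

    If the algorithmic output falls outside [lower_bound, upper_bound],
    it is replaced by the nearest bound; otherwise it is left unchanged.
    """
    return float(np.clip(algorithmic_decision, lower_bound, upper_bound))

# Hypothetical example: a pricing algorithm trained on contaminated data
# suggests $4.20, but the analyst's domain knowledge says a reasonable
# price lies between $8 and $15.
augmented_price = guardrail_clip(4.20, lower_bound=8.0, upper_bound=15.0)
print(augmented_price)  # 8.0 -- the decision is pulled back to the guardrail
```

When the algorithm is well specified and trained on clean, sufficient data, its output rarely leaves the interval and the clip is inactive; the guardrail only changes the decision in the pitfall scenarios the abstract lists.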
