Paper Title
VisCUIT: Visual Auditor for Bias in CNN Image Classifier
Paper Authors
Paper Abstract
CNN image classifiers are widely used, thanks to their efficiency and accuracy. However, they can suffer from biases that impede their practical application. Most existing bias investigation techniques are either inapplicable to general image classification tasks or require significant user effort to peruse all data subgroups and manually specify which data attributes to inspect. We present VisCUIT, an interactive visualization system that reveals how and why a CNN classifier is biased. VisCUIT visually summarizes the subgroups on which the classifier underperforms and helps users discover and characterize the causes of underperformance by revealing image concepts responsible for activating neurons that contribute to misclassifications. VisCUIT runs in modern browsers and is open-source, allowing people to easily access the tool and extend it to other model architectures and datasets. VisCUIT is available at the following public demo link: https://poloclub.github.io/VisCUIT. A video demo is available at https://youtu.be/eNDbSyM4R_4.