Paper Title

Automation Slicing and Testing for in-App Deep Learning Models

Paper Authors

Hao Wu, Yuhang Gong, Xiaopeng Ke, Hanzhong Liang, Minghao Li, Fengyuan Xu, Yunxin Liu, Sheng Zhong

Paper Abstract

Intelligent Apps (iApps), equipped with in-App deep learning (DL) models, are emerging to offer stable DL inference services. However, App marketplaces have trouble automatically testing iApps because the in-App model is black-box and coupled with ordinary code. In this work, we propose an automated tool, ASTM, which enables large-scale testing of in-App models. ASTM takes an iApp as input, and its outputs can replace the in-App model as the test object. ASTM proposes two reconstruction techniques: translating the in-App model into a backpropagation-enabled version and reconstructing the IO processing code for DL inference. With ASTM's help, we perform a large-scale study on the robustness of 100 unique commercial in-App models and find that 56% of in-App models are vulnerable to robustness issues in our context. ASTM also detects physical attacks against three representative iApps that may cause economic losses and security issues.
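
The abstract does not give implementation details, but as a rough illustration of why a backpropagation-enabled reconstruction matters for robustness testing, the sketch below applies a standard one-step FGSM perturbation to a PyTorch classifier. The function `fgsm_attack`, its parameters, and the epsilon value are illustrative placeholders under our own assumptions, not part of ASTM's published interface.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, eps=0.03):
    """One-step FGSM perturbation against a backprop-enabled model.

    Illustrative sketch only: robustness testing of this kind requires
    gradients w.r.t. the input, which is why a backpropagation-enabled
    reconstruction of an inference-only in-App model is useful.
    """
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    loss = F.cross_entropy(logits, label)
    loss.backward()
    # Step the input in the direction that increases the loss, then
    # clamp back to the valid pixel range.
    x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
    return x_adv
```

A robustness check would then compare `model(x_adv).argmax(dim=1)` against the original prediction; a flipped label under a small `eps` indicates the kind of vulnerability the study reports.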
