Paper Title

Adaptive Patch Exiting for Scalable Single Image Super-Resolution

Paper Authors

Shizun Wang, Jiaming Liu, Kaixin Chen, Xiaoqi Li, Ming Lu, Yandong Guo

Paper Abstract

Since the future of computing is heterogeneous, scalability is a crucial problem for single image super-resolution. Recent works try to train a single network that can be deployed on platforms with different capacities. However, they rely on pixel-wise sparse convolution, which is not hardware-friendly and achieves only limited practical speedup. Because an image can be divided into patches of varying restoration difficulty, we present a scalable method based on Adaptive Patch Exiting (APE) to achieve a more practical speedup. Specifically, we train a regressor to predict the incremental capacity of each layer for a given patch. Once the incremental capacity falls below a threshold, the patch exits at that layer. Our method can easily adjust the trade-off between performance and efficiency by changing the threshold on incremental capacity. Furthermore, we propose a novel strategy to enable the network training of our method. We conduct extensive experiments across various backbones, datasets, and scaling factors to demonstrate the advantages of our method. Code is available at https://github.com/littlepure2333/APE.
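The abstract describes the exiting mechanism only at a high level. Below is a minimal PyTorch sketch of patch-level early exiting under stated assumptions: the backbone (`head`/`layers`/`tail`), the `regressor` architecture, and the exit test are placeholders invented for illustration, not the authors' implementation (see the linked repository for the real code).

```python
import torch
import torch.nn as nn


class APESketch(nn.Module):
    """Illustrative sketch of Adaptive Patch Exiting (APE).

    Assumptions (not from the paper text): the backbone is a stack of
    residual blocks, and the regressor pools the current feature map to
    a scalar "incremental capacity" estimate for the next layer.
    """

    def __init__(self, num_layers=16, channels=64):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.layers = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
            )
            for _ in range(num_layers)
        ])
        # Hypothetical regressor: feature map -> scalar incremental capacity.
        self.regressor = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, 1),
        )
        # Real SR backbones upsample in the tail (e.g. PixelShuffle);
        # omitted here for brevity.
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, patch, threshold=0.05):
        """Process one patch at a time; a larger threshold exits earlier
        (faster inference, lower fidelity)."""
        feat = self.head(patch)
        for layer in self.layers:
            # Exit once the predicted incremental capacity drops below
            # the threshold; `.item()` assumes batch size 1 (one patch).
            if self.regressor(feat).item() < threshold:
                break
            feat = feat + layer(feat)  # residual block (placeholder)
        return self.tail(feat)


if __name__ == "__main__":
    model = APESketch().eval()
    patch = torch.randn(1, 3, 32, 32)  # one low-resolution patch
    with torch.no_grad():
        out = model(patch, threshold=0.05)
    print(out.shape)  # torch.Size([1, 3, 32, 32]); real SR would upsample
```

At inference, an image would be split into patches that are restored independently, so easy patches exit early while hard ones use the full network; sweeping `threshold` traces out the performance-efficiency trade-off described in the abstract.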
