Paper Title

Verifiable and Energy Efficient Medical Image Analysis with Quantised Self-attentive Deep Neural Networks

Paper Authors

Rakshith Sathish, Swanand Khare, Debdoot Sheet

Paper Abstract

Convolutional Neural Networks have played a significant role in various medical imaging tasks like classification and segmentation. They provide state-of-the-art performance compared to classical image processing algorithms. However, the major downsides of these methods are their high computational complexity, reliance on high-performance hardware like GPUs, and the inherent black-box nature of the models. In this paper, we propose quantised stand-alone self-attention-based models as an alternative to traditional CNNs. In the proposed class of networks, convolutional layers are replaced with stand-alone self-attention layers, and the network parameters are quantised after training. We experimentally validate the performance of our method on classification and segmentation tasks. We observe a $50-80\%$ reduction in model size, $60-80\%$ fewer parameters, $40-85\%$ fewer FLOPs, and $65-80\%$ higher energy efficiency during inference on CPUs. The code will be available at \href{https://github.com/Rakshith2597/Quantised-Self-Attentive-Deep-Neural-Network}{https://github.com/Rakshith2597/Quantised-Self-Attentive-Deep-Neural-Network}.
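To make the two ingredients of the abstract concrete, below is a minimal PyTorch sketch of (1) a stand-alone local self-attention layer used in place of a stride-1 convolution and (2) post-training quantisation of its parameters. This is not the authors' implementation (see their repository for that): the class name StandAloneSelfAttention2d is hypothetical, the layer omits the relative positional embeddings used in full stand-alone self-attention, and dynamic int8 quantisation via torch.ao.quantization.quantize_dynamic is shown as one concrete post-training option; the paper's quantisation scheme may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StandAloneSelfAttention2d(nn.Module):
    """Local self-attention over k x k neighbourhoods; a drop-in
    replacement for a stride-1 convolution (illustrative sketch)."""

    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        self.k = kernel_size
        # 1x1 projections expressed as nn.Linear so that dynamic
        # quantisation (which targets nn.Linear) can pick them up.
        self.query = nn.Linear(in_ch, out_ch, bias=False)
        self.key = nn.Linear(in_ch, out_ch, bias=False)
        self.value = nn.Linear(in_ch, out_ch, bias=False)
        self.scale = out_ch ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, C, H, W = x.shape
        pad = self.k // 2
        # Extract the k*k neighbourhood around every pixel: (B, C*k*k, H*W).
        patches = F.unfold(x, self.k, padding=pad)
        patches = patches.view(B, C, self.k * self.k, H * W)
        patches = patches.permute(0, 3, 2, 1)                   # (B, HW, k*k, C)
        centre = x.permute(0, 2, 3, 1).reshape(B, H * W, 1, C)  # query = centre pixel
        q = self.query(centre)                                  # (B, HW, 1, out_ch)
        k = self.key(patches)                                   # (B, HW, k*k, out_ch)
        v = self.value(patches)                                 # (B, HW, k*k, out_ch)
        # Scaled dot-product attention over each local neighbourhood.
        attn = torch.softmax((q * k).sum(-1) * self.scale, -1)  # (B, HW, k*k)
        out = (attn.unsqueeze(-1) * v).sum(dim=2)               # (B, HW, out_ch)
        return out.permute(0, 2, 1).reshape(B, -1, H, W)

# Post-training dynamic quantisation: the Linear projection weights are
# stored in int8, and inference runs on CPU.
model = StandAloneSelfAttention2d(3, 16)
qmodel = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(qmodel(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 16, 32, 32])
```

Because the attention is restricted to a k x k window, the layer keeps the locality and weight sharing of a convolution while replacing the fixed kernel with content-dependent weights, which is where the parameter and FLOP savings quoted in the abstract come from.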
