Paper Title
Stage-Aware Feature Alignment Network for Real-Time Semantic Segmentation of Street Scenes
Paper Authors
Paper Abstract
Over the past few years, deep convolutional neural network-based methods have made great progress in semantic segmentation of street scenes. Some recent methods align feature maps to alleviate the semantic gap between them and achieve high segmentation accuracy. However, they usually adopt feature alignment modules with the same network configuration throughout the decoder and thus ignore the different roles of the decoder stages during feature aggregation, leading to a complex decoder structure that greatly affects the inference speed. In this paper, we present a novel Stage-aware Feature Alignment Network (SFANet) based on the encoder-decoder structure for real-time semantic segmentation of street scenes. Specifically, a Stage-aware Feature Alignment module (SFA) is proposed to align and aggregate two adjacent levels of feature maps effectively. In the SFA, by taking into account the unique role of each stage in the decoder, a novel stage-aware Feature Enhancement Block (FEB) is designed to enhance spatial details and contextual information of feature maps from the encoder. In this way, we are able to address the misalignment problem with a very simple and efficient multi-branch decoder structure. Moreover, an auxiliary training strategy is developed to explicitly alleviate the multi-scale object problem without bringing additional computational costs during the inference phase. Experimental results show that the proposed SFANet exhibits a good balance between accuracy and speed for real-time semantic segmentation of street scenes. In particular, based on ResNet-18, SFANet respectively obtains 78.1% and 74.7% mean class-wise Intersection-over-Union (mIoU) at inference speeds of 37 FPS and 96 FPS on the challenging Cityscapes and CamVid test datasets, using only a single GTX 1080Ti GPU.
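To make the aggregation idea concrete, below is a minimal numpy sketch of how a stage-aware alignment step might fuse two adjacent levels of feature maps: the deeper, lower-resolution decoder feature is bilinearly upsampled to the encoder feature's resolution, the encoder feature is enhanced in a stage-dependent way, and the two are summed. All names (`sfa_step`, `bilinear_upsample`) are hypothetical, and the simple spatial/channel gating used here is only a stand-in for the paper's learned Feature Enhancement Blocks, not the actual SFANet implementation.

```python
import numpy as np

def bilinear_upsample(x, scale=2):
    """Bilinearly upsample a feature map of shape (C, H, W) by `scale`."""
    C, H, W = x.shape
    Hn, Wn = H * scale, W * scale
    ys = (np.arange(Hn) + 0.5) / scale - 0.5           # sample coords in source grid
    xs = (np.arange(Wn) + 0.5) / scale - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, H - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, W - 1)
    y1 = np.clip(y0 + 1, 0, H - 1)
    x1 = np.clip(x0 + 1, 0, W - 1)
    wy = np.clip(ys - y0, 0.0, 1.0)[:, None]           # interpolation weights
    wx = np.clip(xs - x0, 0.0, 1.0)[None, :]
    out = np.empty((C, Hn, Wn), dtype=x.dtype)
    for c in range(C):
        f = x[c]
        top = f[y0][:, x0] * (1 - wx) + f[y0][:, x1] * wx
        bot = f[y1][:, x0] * (1 - wx) + f[y1][:, x1] * wx
        out[c] = top * (1 - wy) + bot * wy
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sfa_step(encoder_feat, decoder_feat, stage):
    """One stage-aware alignment/aggregation step (illustrative only).

    encoder_feat : (C, 2H, 2W) skip feature from the encoder
    decoder_feat : (C, H, W) deeper feature from the previous decoder stage
    stage        : "shallow" stages stress spatial detail, "deep" stages
                   stress context (stand-ins for the learned FEBs)
    """
    up = bilinear_upsample(decoder_feat, scale=2)      # resolve the size mismatch
    if stage == "shallow":
        # spatial gate: per-pixel weight from the channel-averaged map
        gate = sigmoid(encoder_feat.mean(axis=0, keepdims=True))
    else:
        # channel gate: per-channel weight from globally pooled context
        gate = sigmoid(encoder_feat.mean(axis=(1, 2), keepdims=True))
    return gate * encoder_feat + up                    # fuse the two levels

rng = np.random.default_rng(0)
skip = rng.standard_normal((4, 8, 8))                  # encoder skip feature
deep = rng.standard_normal((4, 4, 4))                  # previous decoder output
fused = sfa_step(skip, deep, stage="shallow")
print(fused.shape)  # (4, 8, 8)
```

Because the upsampled deep feature and the gated encoder feature are merged by simple addition, each decoder stage stays lightweight, which is in the spirit of the simple multi-branch decoder the abstract describes.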