Medical image segmentation based on active fusion-transduction of multi-stream features

Authors:

Highlights:

Abstract

As an important building block of automatic medical systems, image segmentation has made great progress thanks to the data-driven mechanism of deep architectures. Recently, numerous methods based on U-shaped networks have been proposed to boost segmentation performance. However, the encoders in these models often share only a single data route, which may weaken the representation ability of the features. Although some existing work applies multiple learning paths to address this problem, deep supervision techniques are additionally required to monitor the training status of each individual path, which may impose extra training burdens. Moreover, under these frameworks, the semantic gap between different paths may interfere with the model’s learning performance, and the multi-granular features learned by the encoder may not be well coordinated for the segmentation task, since the potential transduction ability of skip connections still requires further investigation. To address these issues, we introduce a novel medical image segmentation framework, namely AFT-Net, in which an attention-based data fusion model is proposed to cooperate effectively with our multi-stream encoder, and an Inception Res-Atrous Convolution block is proposed to collect correlated contextual information in the decoding stage. By progressively accumulating the features from different paths, our method can establish meaningful connections between structural and semantic features while keeping an integral and flexible layout without deeply customized supervision. Extensive experiments on four medical image datasets demonstrate that our method acquires image features with both diversity and quality, thereby outperforming current state-of-the-art segmentation methods.
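To make the atrous-convolution idea behind the decoder concrete, the sketch below shows a minimal 1-D dilated convolution and an Inception-style fusion of parallel branches with different dilation rates. This is purely illustrative: the function names, the 1-D setting, the fixed kernel, and summation-based fusion are assumptions for exposition, not the paper's actual Inception Res-Atrous Convolution block, which operates on 2-D feature maps with learned kernels.

```python
# Illustrative sketch only: 1-D atrous (dilated) convolution and a toy
# multi-branch fusion. The actual AFT-Net block is 2-D with learned
# weights and attention-based fusion; this just shows the mechanism.

def atrous_conv1d(x, w, dilation):
    """Valid 1-D convolution whose kernel taps are `dilation` steps apart,
    enlarging the receptive field without adding parameters."""
    span = (len(w) - 1) * dilation  # receptive field minus one
    return [sum(w[k] * x[i + k * dilation] for k in range(len(w)))
            for i in range(len(x) - span)]

def multi_branch_fusion(x, w, dilations=(1, 2, 3)):
    """Run parallel atrous branches (Inception-style) and fuse them by
    elementwise summation over the overlapping valid region -- a crude
    stand-in for learned channel-wise fusion."""
    branches = [atrous_conv1d(x, w, d) for d in dilations]
    n = min(len(b) for b in branches)  # crop all branches to the shortest
    return [sum(b[i] for b in branches) for i in range(n)]
```

For example, with `x = [1, 2, 3, 4, 5, 6]` and `w = [1, 1, 1]`, a branch with dilation 2 sums inputs two steps apart (`x[0]+x[2]+x[4] = 9`), so branches with larger dilation capture wider context from the same kernel size; fusing several rates gathers correlated context at multiple granularities.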

Keywords: Medical image segmentation, Multi-stream, Feature fusion, Attention mechanism, Atrous convolution

Article history: Received 25 November 2020, Revised 10 March 2021, Accepted 11 March 2021, Available online 15 March 2021, Version of Record 22 March 2021.

DOI: https://doi.org/10.1016/j.knosys.2021.106950