Authors
Pengfei Song, Jinjiang Li, Hui Fan, Linwei Fan
Abstract
Accurate and automatic segmentation of medical images is a key step in clinical diagnosis and analysis. Following the successful application of Transformer models in computer vision, researchers have begun to explore their use in medical image segmentation, especially in combination with convolutional neural networks (CNNs) in encoder–decoder structures, which have achieved remarkable results in this field. However, most studies combine Transformers with CNNs at a single scale, or process only the highest-level semantic feature information, ignoring the rich positional information contained in lower-level semantic features. At the same time, for problems such as blurred structural boundaries and heterogeneous textures, most existing methods simply concatenate contour information to capture target boundaries; such methods cannot capture the precise outline of the target and ignore the potential relationship between boundary and region. In this paper, we propose TGDAUNet, which consists of a dual-branch backbone network of CNNs and Transformers together with a parallel attention mechanism, to achieve accurate segmentation of lesions in medical images. First, high-level semantic feature information from the CNN backbone branch is fused at multiple scales, so that high-level and low-level features complement each other's positional and spatial information. We further use the polarized self-attention (PSA) module to reduce the redundant information introduced by multi-scale fusion, to couple better with the features extracted by the Transformer backbone branch, and to establish global contextual long-range dependencies at multiple scales. In addition, we design a Reverse Graph-reasoned Fusion (RGF) module and a Feature Aggregation (FA) module to jointly guide the global context. The FA module aggregates high-level semantic feature information to generate an initial global prediction segmentation map. The RGF module captures non-salient boundary features in the initial or subsequent global prediction maps through a reverse attention mechanism, and establishes a graph reasoning module to explore the potential semantic relationships between boundaries and regions, further refining target boundaries. Finally, to validate the effectiveness of the proposed method, we compare it with current popular methods on the CVC-ClinicDB, Kvasir-SEG, ETIS, CVC-ColonDB, and CVC-300 datasets, as well as the skin cancer segmentation datasets ISIC-2016 and ISIC-2017. Extensive experimental results show that our method outperforms current popular methods. Source code is released at https://github.com/sd-spf/TGDAUNet.
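For illustration, below is a minimal sketch of the reverse attention step that the RGF module builds on, assuming a PyTorch implementation; the layer shapes, the single 3x3 refinement convolution, and the residual update are our assumptions for exposition, not the authors' released code (see the repository above for the actual implementation).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReverseAttention(nn.Module):
    """Illustrative reverse attention step (hypothetical layer sizes).

    Given a coarse prediction map from a deeper stage, the weights
    1 - sigmoid(pred) emphasize the regions the coarse map missed
    (typically object boundaries), which are used to reweight the
    current-stage features before a residual refinement.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Single refinement conv is an assumption for brevity.
        self.refine = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, feat: torch.Tensor, coarse_pred: torch.Tensor) -> torch.Tensor:
        # Upsample the coarse 1-channel prediction to the feature resolution.
        pred = F.interpolate(
            coarse_pred, size=feat.shape[2:], mode="bilinear", align_corners=False
        )
        # Reverse attention: 1 - sigmoid highlights non-salient (missed) regions.
        rev = 1.0 - torch.sigmoid(pred)
        # Reweight the features and refine the prediction as a residual update.
        return self.refine(rev * feat) + pred

# Usage sketch: 64-channel stage features and a coarse map from a deeper stage.
if __name__ == "__main__":
    ra = ReverseAttention(channels=64)
    feat = torch.randn(1, 64, 88, 88)
    coarse = torch.randn(1, 1, 22, 22)
    refined = ra(feat, coarse)  # shape: (1, 1, 88, 88)
```

In TGDAUNet the output of such a step feeds a graph reasoning module that relates boundary and region features; that component is omitted here for brevity.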