Thrombus
Optical coherence tomography
Artificial intelligence
Computer science
Segmentation
Discriminative model
Robustness (evolution)
Computer vision
Pattern recognition (psychology)
Medicine
Radiology
Cardiology
Biochemistry
Gene
Chemistry
Authors
Miao Chu, Giovanni Luigi De Maria, R H Dai, Stefano Benenati, Wei Yu, Jiaxin Zhong, Rafail A. Kotronias, Jason Walsh, Stefano Andreaggi, Vittorio Zuccarelli, Jason Chai, Keith M. Channon, Adrian Banning, Shengxian Tu
Identifier
DOI: 10.1016/j.media.2024.103265
Abstract
Acute coronary syndromes (ACS) are among the leading causes of mortality worldwide, with atherosclerotic plaque rupture and subsequent thrombus formation as the main underlying substrate. Thrombus burden evaluation is important for tailoring treatment and predicting prognosis. Coronary optical coherence tomography (OCT) enables in-vivo visualization of thrombus that cannot be achieved by other imaging modalities. However, automatic quantification of thrombus on OCT has not been implemented. The main challenges arise from the variation in thrombus location, size, and shape irregularity, in addition to the small dataset size. In this paper, we propose a novel dual-coordinate cross-attention transformer network, termed DCCAT, to overcome these challenges and achieve the first automatic segmentation of thrombus on OCT. Imaging features from both Cartesian and polar coordinates are encoded and fused based on long-range correspondence via a multi-head cross-attention mechanism. The dual-coordinate cross-attention block is hierarchically stacked amid convolutional layers at multiple levels, allowing comprehensive feature enhancement. The model was developed on 5,649 OCT frames from 339 patients and tested on independent external OCT data comprising 548 frames from 52 patients. DCCAT achieved a Dice similarity coefficient (DSC) of 0.706 in segmenting thrombus, significantly higher than the CNN-based (0.656) and Transformer-based (0.584) models. We show that the additional polar image input not only leverages discriminative features from another coordinate system but also improves model robustness to geometric transformations. Experimental results show that DCCAT achieves competitive performance with only 10% of the total data, highlighting its data efficiency. The proposed dual-coordinate cross-attention design can be easily integrated into other Transformer models to boost performance.
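The abstract gives no implementation details, but its core idea, fusing Cartesian and polar features of the same OCT frame through multi-head cross-attention, can be illustrated with a short PyTorch sketch. Everything below is an assumption for illustration, not the authors' DCCAT code: the names DualCoordinateCrossAttention and dice_score are hypothetical, the tensor shapes, the choice of Cartesian tokens as queries against polar keys/values, and the residual fusion are guesses; only the Dice similarity coefficient follows its standard definition, DSC = 2|A∩B| / (|A| + |B|).

# A minimal sketch (not the authors' implementation) of a
# dual-coordinate cross-attention block, assuming standard
# multi-head cross-attention where Cartesian-view tokens attend
# to polar-view tokens of the same frame.
import torch
import torch.nn as nn


class DualCoordinateCrossAttention(nn.Module):  # hypothetical name
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm_cart = nn.LayerNorm(dim)
        self.norm_polar = nn.LayerNorm(dim)
        # Multi-head attention: Cartesian tokens (queries) attend to
        # polar tokens (keys/values) for long-range correspondence.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim),
            nn.GELU(),
            nn.Linear(4 * dim, dim),
        )

    def forward(self, cart_tokens: torch.Tensor, polar_tokens: torch.Tensor) -> torch.Tensor:
        # Both inputs: (batch, seq_len, dim) token sequences, e.g.
        # flattened convolutional feature maps of the two views.
        q = self.norm_cart(cart_tokens)
        kv = self.norm_polar(polar_tokens)
        fused, _ = self.cross_attn(q, kv, kv)
        x = cart_tokens + fused          # residual fusion of the two views
        return x + self.ffn(x)


def dice_score(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # Dice similarity coefficient on binary masks: 2|A∩B| / (|A| + |B|).
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)


if __name__ == "__main__":
    block = DualCoordinateCrossAttention(dim=64)
    cart = torch.randn(2, 256, 64)   # e.g. a 16x16 Cartesian feature map, flattened
    polar = torch.randn(2, 256, 64)  # matching polar feature map, flattened
    print(block(cart, polar).shape)  # torch.Size([2, 256, 64])

The sketch shows a single fusion level with Cartesian queries only; per the abstract, the actual DCCAT stacks such blocks hierarchically amid convolutional layers at multiple levels, and its exact query/key assignment may differ.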