Authors
Go-Eun Lee, Seon Ho Kim, Jungchan Cho, Sang Tae Choi, Sang-Il Choi
Identifier
DOI: 10.1007/978-3-031-43904-9_52
Abstract
We propose a novel text-guided cross-position attention module that applies the text and image modalities jointly to position attention in medical image segmentation. To match the dimension of the text features to that of the image feature map, we multiply the text features by learnable parameters and combine the multi-modal semantics via cross-attention. This allows the model to learn dependencies between diverse characteristics of text and image. The proposed model demonstrates superior performance compared to other medical models that use image-only or image-text data. Furthermore, we employ our module as a region-of-interest (RoI) generator for classifying inflammation of the sacroiliac joints; the RoIs obtained from the model help improve the performance of the classification models.
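To make the fusion step concrete, the following is a minimal PyTorch sketch of the idea described in the abstract: text embeddings are lifted by a learnable projection so their dimension matches the image feature map, and each spatial position of the image then attends to the text tokens via cross-attention. The class name, tensor shapes, and the use of `nn.MultiheadAttention` are assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class TextGuidedCrossPositionAttention(nn.Module):
    """Sketch: fuse text semantics into an image feature map via cross-attention."""

    def __init__(self, img_channels: int, text_dim: int, num_heads: int = 4):
        super().__init__()
        # Learnable parameters that map text features to the image channel dimension.
        self.text_proj = nn.Linear(text_dim, img_channels)
        # Cross-attention: image positions are queries, text tokens are keys/values.
        self.cross_attn = nn.MultiheadAttention(
            embed_dim=img_channels, num_heads=num_heads, batch_first=True
        )
        self.norm = nn.LayerNorm(img_channels)

    def forward(self, img_feat: torch.Tensor, text_feat: torch.Tensor) -> torch.Tensor:
        # img_feat:  (B, C, H, W) feature map from the image encoder
        # text_feat: (B, T, D)    token embeddings from the text encoder
        b, c, h, w = img_feat.shape
        # Flatten spatial positions so each position can attend to the text tokens.
        queries = img_feat.flatten(2).transpose(1, 2)      # (B, H*W, C)
        keys = values = self.text_proj(text_feat)          # (B, T, C)
        fused, _ = self.cross_attn(queries, keys, values)  # (B, H*W, C)
        fused = self.norm(queries + fused)                 # residual + layer norm
        # Restore the spatial layout for the segmentation decoder.
        return fused.transpose(1, 2).reshape(b, c, h, w)


if __name__ == "__main__":
    module = TextGuidedCrossPositionAttention(img_channels=256, text_dim=768)
    img = torch.randn(2, 256, 32, 32)   # hypothetical image feature map
    txt = torch.randn(2, 16, 768)       # hypothetical text token embeddings
    print(module(img, txt).shape)       # torch.Size([2, 256, 32, 32])
```

The fused feature map keeps the original spatial resolution, so it can either feed a segmentation decoder or, as in the RoI-generation use described above, be used to localize regions for a downstream classifier.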