Modality (human–computer interaction)
Deep learning
Artificial intelligence
Computer science
Feature (linguistics)
Convolutional neural network
Feature learning
Segmentation
Pattern recognition (psychology)
Process (computing)
Machine learning
Linguistics
Operating system
Philosophy
Authors
Dingwen Zhang, Guohai Huang, Qiang Zhang, Jungong Han, Junwei Han, Yizhou Yu
Identifier
DOI:10.1016/j.patcog.2020.107562
Abstract
Recent advances in machine learning and the prevalence of digital medical images have opened up an opportunity to address the challenging brain tumor segmentation (BTS) task using deep convolutional neural networks. However, unlike the widely available RGB image data, the medical image data used in brain tumor segmentation are relatively scarce in scale but richer in modality information. To this end, this paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from multi-modality MRI data. The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale. The proposed framework consists of two learning processes: the cross-modality feature transition (CMFT) process and the cross-modality feature fusion (CMFF) process, which aim to learn rich feature representations by transferring knowledge across different modalities and fusing knowledge from different modalities, respectively. Comprehensive experiments conducted on the BraTS benchmarks show that the proposed cross-modality deep feature learning framework effectively improves brain tumor segmentation performance compared with baseline methods and state-of-the-art methods.
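The abstract describes a two-process design (CMFT for knowledge transition across modalities, CMFF for fusing modality-specific features) without implementation details. The sketch below illustrates only the fusion idea in minimal PyTorch form: per-modality encoders whose feature maps are combined by learned modality weights before a segmentation head. The class names, the 2D toy shapes (real BraTS data are 3D volumes), and the weighted-sum fusion are illustrative assumptions, not the paper's actual CMFF architecture, and the CMFT process is not sketched here.

```python
import torch
import torch.nn as nn


class ModalityEncoder(nn.Module):
    """Tiny per-modality encoder; a stand-in for a full CNN backbone."""

    def __init__(self, out_channels: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, out_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class CrossModalityFusion(nn.Module):
    """Fuse per-modality features with learned modality weights (softmax over modalities).

    This is an assumed, simplified fusion scheme for illustration only.
    """

    def __init__(self, num_modalities: int = 4, channels: int = 16, num_classes: int = 4):
        super().__init__()
        self.encoders = nn.ModuleList(
            [ModalityEncoder(channels) for _ in range(num_modalities)]
        )
        # One learnable scalar per modality, normalized by softmax at fusion time.
        self.modality_logits = nn.Parameter(torch.zeros(num_modalities))
        self.head = nn.Conv2d(channels, num_classes, kernel_size=1)

    def forward(self, modalities: list[torch.Tensor]) -> torch.Tensor:
        # modalities: one (B, 1, H, W) tensor per MRI modality, e.g. FLAIR, T1, T1ce, T2.
        feats = torch.stack(
            [enc(m) for enc, m in zip(self.encoders, modalities)], dim=0
        )  # (M, B, C, H, W)
        weights = torch.softmax(self.modality_logits, dim=0).view(-1, 1, 1, 1, 1)
        fused = (weights * feats).sum(dim=0)  # weighted sum over modalities -> (B, C, H, W)
        return self.head(fused)  # per-pixel class logits


if __name__ == "__main__":
    model = CrossModalityFusion()
    scans = [torch.randn(2, 1, 64, 64) for _ in range(4)]  # toy-sized modality slices
    logits = model(scans)
    print(logits.shape)  # torch.Size([2, 4, 64, 64])
```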