Keywords
Computer science
Artificial intelligence
Masking
Abstraction
Encoding
Benchmark
Representation
Source code
Joint space
Natural language processing
Feature learning
Computer vision
Domain
Pattern recognition
Machine learning
Programming language
Authors
Zhihong Chen, Yuhao Du, Jinpeng Hu, Yang Liu, Guanbin Li, Xiang Wan, Tsung-Hui Chang
Identifier
DOI: 10.1016/j.media.2023.103018
Abstract
Recently, masked autoencoders have demonstrated their feasibility in extracting effective image and text features (e.g., BERT for natural language processing (NLP) and MAE in computer vision (CV)). This study investigates the potential of applying these techniques to vision-and-language representation learning in the medical domain. To this end, we introduce a self-supervised learning paradigm, multi-modal masked autoencoders (M3AE). It learns to map medical images and texts to a joint space by reconstructing pixels and tokens from randomly masked images and texts. Specifically, we design this approach from three aspects: First, taking into account the varying information densities of vision and language, we employ distinct masking ratios for input images and text, with a notably higher masking ratio for images; Second, we utilize visual and textual features from different layers for reconstruction to address the varying levels of abstraction in vision and language; Third, we develop different designs for the vision and language decoders. We establish a medical vision-and-language benchmark to conduct an extensive evaluation. Our experimental results demonstrate the effectiveness of the proposed method, which achieves state-of-the-art results on all downstream tasks. Further analyses validate the contribution of the individual components, and we discuss the limitations of the proposed approach. The source code is available at https://github.com/zhjohnchan/M3AE.
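The following is a minimal PyTorch sketch of the pre-training objective described in the abstract: image patches and text tokens are masked at distinct ratios (with a much higher ratio for images), encoded jointly, and reconstructed by separate decoders from features taken at different encoder layers. All module names, dimensions, masking ratios, and the specific layer choices are illustrative assumptions for exposition; the released implementation at https://github.com/zhjohnchan/M3AE differs in detail.

import torch
import torch.nn as nn
import torch.nn.functional as F


class M3AESketch(nn.Module):
    """Toy multi-modal masked autoencoder: mask, jointly encode, reconstruct."""

    def __init__(self, patch_dim=768, vocab_size=30522, hidden=768,
                 depth=6, img_mask_ratio=0.75, txt_mask_ratio=0.15):
        super().__init__()
        self.img_mask_ratio = img_mask_ratio  # markedly higher ratio for images
        self.txt_mask_ratio = txt_mask_ratio  # lower ratio for text
        self.img_embed = nn.Linear(patch_dim, hidden)
        self.txt_embed = nn.Embedding(vocab_size, hidden)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, hidden))
        # Shared multi-modal encoder over the concatenated image/text sequence.
        self.layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(hidden, nhead=8, batch_first=True)
             for _ in range(depth)])
        # Distinct decoders: a small transformer for pixels, a linear head for tokens.
        dec_layer = nn.TransformerEncoderLayer(hidden, nhead=8, batch_first=True)
        self.img_decoder = nn.TransformerEncoder(dec_layer, num_layers=2)
        self.pixel_head = nn.Linear(hidden, patch_dim)
        self.token_head = nn.Linear(hidden, vocab_size)

    def random_mask(self, x, ratio):
        # Replace a random subset of positions with the learned mask token.
        b, n, _ = x.shape
        mask = torch.rand(b, n, device=x.device) < ratio
        x = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(x), x)
        return x, mask

    def forward(self, patches, token_ids):
        # patches: (B, N_img, patch_dim) flattened image patches
        # token_ids: (B, N_txt) text token indices
        img, img_mask = self.random_mask(self.img_embed(patches), self.img_mask_ratio)
        txt, txt_mask = self.random_mask(self.txt_embed(token_ids), self.txt_mask_ratio)
        n_img = img.size(1)
        h, feats = torch.cat([img, txt], dim=1), []
        for layer in self.layers:
            h = layer(h)
            feats.append(h)
        # Features from different layers feed the two reconstructions; using an
        # intermediate layer for pixels and the last layer for tokens is an
        # illustrative assumption, not the paper's exact configuration.
        img_h = feats[len(feats) // 2][:, :n_img]
        txt_h = feats[-1][:, n_img:]
        pred_pixels = self.pixel_head(self.img_decoder(img_h))
        pred_tokens = self.token_head(txt_h)
        # Losses are computed on masked positions only.
        pixel_loss = ((pred_pixels - patches) ** 2).mean(-1)[img_mask].mean()
        token_loss = F.cross_entropy(pred_tokens[txt_mask], token_ids[txt_mask])
        return pixel_loss + token_loss


model = M3AESketch()
patches = torch.randn(2, 196, 768)            # e.g., 14x14 patches of 16x16x3 pixels
token_ids = torch.randint(0, 30522, (2, 32))  # e.g., a 32-token report
loss = model(patches, token_ids)
loss.backward()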