Computer science
Leverage (statistics)
Eye movement
Gaze
Inference
Artificial intelligence
Deep learning
Machine learning
Convolutional neural network
Visual search
Computer vision
Authors
Moinak Bhattacharya,Shubham Jain,Prateek Prasanna
Source
Journal: Cornell University - arXiv
Date: 2022-01-01
Identifier
DOI:10.48550/arxiv.2202.11781
Abstract
In this work, we present RadioTransformer, a novel visual attention-driven transformer framework that leverages radiologists' gaze patterns and models their visuo-cognitive behavior for disease diagnosis on chest radiographs. Domain experts, such as radiologists, rely on visual information for medical image interpretation. On the other hand, deep neural networks have demonstrated significant promise in similar tasks even where visual interpretation is challenging. Eye-gaze tracking has been used to capture the viewing behavior of domain experts, lending insights into the complexity of visual search. However, deep learning frameworks, even those that rely on attention mechanisms, do not leverage this rich domain information. RadioTransformer fills this critical gap by learning from radiologists' visual search patterns, encoded as 'human visual attention regions', in a cascaded global-focal transformer framework. The overall 'global' image characteristics and the more detailed 'local' features are captured by the proposed global and focal modules, respectively. We experimentally validate the efficacy of our student-teacher approach on 8 datasets involving different disease classification tasks where eye-gaze data is not available during the inference phase. Code: https://github.com/bmi-imaginelab/radiotransformer.
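The abstract describes two ideas: a cascaded global-focal design (coarse image context conditioning finer local features) and a student-teacher scheme in which gaze-derived attention supervises the teacher during training only, so no eye-gaze is needed at inference. The sketch below is a minimal, hypothetical illustration of that pattern under assumed module names, token layouts, and losses; it is not the authors' released implementation (see the linked repository for that).

```python
# Hypothetical sketch of a cascaded global-focal branch with student-teacher
# gaze supervision, as outlined in the abstract. All names/shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalFocalBranch(nn.Module):
    """A 'global' encoder over coarse patches cascaded with a 'focal' encoder
    over finer patches; both are plain Transformer encoders in this sketch."""
    def __init__(self, dim=128, num_classes=2):
        super().__init__()
        self.global_embed = nn.Conv2d(1, dim, kernel_size=32, stride=32)  # coarse patch tokens
        self.focal_embed = nn.Conv2d(1, dim, kernel_size=16, stride=16)   # fine patch tokens
        g_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        f_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.global_enc = nn.TransformerEncoder(g_layer, num_layers=2)
        self.focal_enc = nn.TransformerEncoder(f_layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        g = self.global_embed(x).flatten(2).transpose(1, 2)   # (B, Ng, dim)
        f = self.focal_embed(x).flatten(2).transpose(1, 2)    # (B, Nf, dim)
        g = self.global_enc(g)
        # Cascade: condition focal tokens on pooled global context.
        f = self.focal_enc(f + g.mean(dim=1, keepdim=True))
        tokens = torch.cat([g, f], dim=1)
        attn = tokens.norm(dim=-1)            # crude per-token saliency proxy
        logits = self.head(tokens.mean(dim=1))
        return logits, attn

def training_step(teacher, student, image, label, gaze_map):
    """Teacher is aligned to a gaze density map (resampled to the token grid,
    an assumption here); the student distills the teacher's saliency, so gaze
    is only required during training."""
    t_logits, t_attn = teacher(image)
    s_logits, s_attn = student(image)
    teacher_loss = F.cross_entropy(t_logits, label) + F.mse_loss(t_attn.softmax(-1), gaze_map)
    student_loss = F.cross_entropy(s_logits, label) + F.mse_loss(s_attn, t_attn.detach())
    return teacher_loss + student_loss

# Example usage with a 224x224 single-channel image (49 coarse + 196 fine tokens):
teacher, student = GlobalFocalBranch(), GlobalFocalBranch()
x = torch.randn(4, 1, 224, 224)
y = torch.randint(0, 2, (4,))
gaze = torch.rand(4, 49 + 196).softmax(-1)   # placeholder gaze density per token
loss = training_step(teacher, student, x, y, gaze)
loss.backward()
```

At inference only the student forward pass is run, which mirrors the abstract's claim that eye-gaze data is not needed at test time.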