Computer science
Artificial intelligence
Reinforcement learning
Context
Classifier
Deep learning
Machine learning
Pattern recognition
Authors
Navdeep Kaur,Ajay Mittal
Identifier
DOI:10.1016/j.compbiomed.2022.105498
Abstract
Automated generation of radiological reports for different imaging modalities is essential to streamline the clinical workflow and alleviate radiologists' workload. It requires the careful amalgamation of image-processing techniques for medical image interpretation and language-generation techniques for report generation. This paper presents CADxReport, a co-attention and reinforcement learning based technique for generating clinically accurate reports from chest x-ray (CXR) images. CADxReport uses a VGG19 network pre-trained on the ImageNet dataset and a multi-label classifier to extract visual and semantic features from CXR images, respectively. A co-attention mechanism over both feature sets produces a context vector, which is then passed to a hierarchical LSTM (HLSTM) for radiological report generation. The model is trained using reinforcement learning to maximize CIDEr rewards. The OpenI dataset, comprising 7,470 CXRs along with 3,955 associated structured radiological reports, is used for training and testing. The proposed model is able to generate clinically accurate reports from CXR images. Quantitative evaluation confirms satisfactory results with the following performance scores: BLEU-1 = 0.577, BLEU-2 = 0.478, BLEU-3 = 0.403, BLEU-4 = 0.346, ROUGE = 0.618, and CIDEr = 0.380. Evaluation using the BLEU, ROUGE, and CIDEr metrics indicates that the proposed model generates sufficiently accurate CXR reports and outperforms most state-of-the-art methods for this task.
• We propose CADxReport, an automatic chest radiographic report generation system.
• A co-attention mechanism attends to both visual and semantic features.
• The model is reinforced with CIDEr rewards to generate clinically correct reports.
• CADxReport outperforms various state-of-the-art methods.
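The core step described in the abstract, attending jointly over visual features (from the CNN) and semantic features (from the multi-label classifier) to build a context vector for the decoder, can be sketched as below. This is a minimal illustrative sketch in NumPy, not the authors' implementation: the attention parameterization (bilinear scoring against the decoder hidden state) and all dimensions are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def co_attention(V, S, h, Wv, Ws):
    """Fuse visual and semantic features into one context vector.

    V: (N, d) visual features, e.g. CNN region features
    S: (M, d) semantic features, e.g. embeddings of predicted tags
    h: (d,)   decoder hidden state
    Wv, Ws: (d, d) bilinear attention weights (hypothetical parameterization)
    """
    a_v = softmax(V @ Wv @ h)   # attention over N visual regions
    ctx_v = a_v @ V             # attended visual context, (d,)
    a_s = softmax(S @ Ws @ h)   # attention over M semantic tags
    ctx_s = a_s @ S             # attended semantic context, (d,)
    # concatenated context vector fed to the hierarchical LSTM decoder
    return np.concatenate([ctx_v, ctx_s])

# toy dimensions for illustration only
d, N, M = 8, 5, 3
V = rng.standard_normal((N, d))
S = rng.standard_normal((M, d))
h = rng.standard_normal(d)
Wv = rng.standard_normal((d, d))
Ws = rng.standard_normal((d, d))

ctx = co_attention(V, S, h, Wv, Ws)
print(ctx.shape)  # (16,) — twice the feature dimension
```

At each decoding step the hidden state re-weights both feature sets, so the generated sentence can ground itself in image regions and predicted findings simultaneously, which is the role the context vector plays ahead of the HLSTM in the pipeline described above.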