Subject tags: Computer science, Adapter (computing), Modal, Training (meteorology), Computer network, Artificial intelligence, Operating system, Chemistry, Physics, Meteorology, Polymer chemistry
Authors
Ting Yu, Wu Lu, Yan Yang, Weidong Han, Qingming Huang, Jun Yu, Ke Zhang
Identifier
DOI:10.1109/jbhi.2025.3535699
Abstract
Automatic medical report generation is an emerging field that aims to transform medical images into descriptive, clinically relevant narratives, potentially reducing radiologists' workload significantly. Despite substantial progress, the increasing model parameter size and corresponding marginal performance gains have limited further development and application. To address this challenge, we introduce an Adapter-enhanced Hierarchical cross-modal Pre-training (AHP) strategy for lightweight medical report generation. This approach significantly reduces the pre-trained model's parameter size while maintaining superior report generation performance through our proposed spatial adapters. To further address the issue of inadequate representation of visual spatial detail, we employ a convolutional stem combined with hierarchical injectors and extractors, fully integrating with traditional Vision Transformers to achieve more comprehensive visual representations. Additionally, our cross-modal pre-training model effectively handles the inherent complex visual-textual relationships in medical imaging. Extensive experiments on multiple datasets, including IU X-Ray, MIMIC-CXR, and bladder pathology, demonstrate our model's exceptional generalization and transfer performance in downstream medical report generation tasks, highlighting AHP's potential in significantly reducing model parameters while enhancing report generation accuracy and efficiency. Our code is available on the project page: https://github.com/OpenMICG/AHP.
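As a rough illustration of the adapter idea the abstract describes (a small trainable module attached to a frozen pre-trained backbone so that only a fraction of the parameters are updated), the following PyTorch sketch shows a bottleneck adapter wrapped around a frozen transformer block. The names SpatialAdapter, AdapterBlock, and bottleneck_dim are illustrative assumptions, not the paper's actual implementation; the official code lives at the project page above.

# Minimal sketch of adapter-based parameter-efficient tuning, assuming a
# bottleneck adapter design; not the AHP implementation itself.
import torch
import torch.nn as nn

class SpatialAdapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project, residual add."""
    def __init__(self, dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck_dim)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck_dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))  # residual keeps the frozen features

class AdapterBlock(nn.Module):
    """Wraps a frozen transformer block; only the attached adapter is trainable."""
    def __init__(self, block: nn.Module, dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.block = block
        for p in self.block.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        self.adapter = SpatialAdapter(dim, bottleneck_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.block(x))

# Toy usage: one ViT-style encoder layer with an adapter on top.
layer = nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True)
model = AdapterBlock(layer, dim=768)
tokens = torch.randn(2, 197, 768)  # (batch, patch tokens, embedding dim)
out = model(tokens)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(out.shape, trainable)  # only the adapter parameters remain trainable

The convolutional stem and the hierarchical injector/extractor components mentioned in the abstract would sit alongside such adapters in the full model; they are omitted here for brevity.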