Authors
Fenghong Yang,Runqing Jiang,Yan Yan,Jing‐Hao Xue,Biao Wang,Hanzi Wang
Identifier
DOI:10.1109/tifs.2024.3364368
Abstract
With the recent advances in deep learning, a large number of methods have been developed for prohibited item detection in X-ray security images. Generally, these methods train models on a single X-ray image dataset that may contain only limited categories of prohibited items. To detect more prohibited items, it is desirable to train a model on a multi-dataset constructed by combining multiple datasets. However, directly applying existing methods to the multi-dataset cannot guarantee good performance, because of the large domain discrepancy between datasets and the occlusion in images. To address these problems, we propose a novel Dual-Mode Learning Network (DML-Net) to effectively detect all the prohibited items in the multi-dataset. In particular, we develop an enhanced RetinaNet as the architecture of DML-Net, introducing a lattice appearance enhanced sub-net to strengthen appearance representations; this benefits the detection of occluded prohibited items. Based on the enhanced RetinaNet, the learning process of DML-Net involves both common mode learning (detecting the prohibited items common across datasets) and unique mode learning (detecting the prohibited items unique to each dataset). For common mode learning, we introduce an adversarial prototype alignment module to align the feature prototypes from different datasets in a domain-invariant feature space. For unique mode learning, we leverage feature distillation to encourage the student model to mimic the features extracted by multiple pre-trained teacher models. By tightly combining and jointly training the dual modes, our DML-Net method successfully eliminates the domain discrepancy and exhibits superior model capacity on the multi-dataset. Extensive experimental results on several combined X-ray image datasets demonstrate the effectiveness of our method compared with several state-of-the-art methods. Our code is available at https://github.com/vampirename/dmlnet.
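The two learning modes in the abstract rest on two standard building blocks: class prototypes (mean feature vectors per class, which the adversarial alignment module pulls together across datasets) and feature distillation (the student matching teacher features). The sketch below is a minimal, hedged illustration of those two ideas in plain Python, not the authors' implementation; all function names are hypothetical, and the actual DML-Net losses may differ.

```python
def mse(a, b):
    """Mean squared error between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def distillation_loss(student_feat, teacher_feats):
    """Unique-mode sketch: average MSE between the student's feature
    vector and the features from each pre-trained teacher model.
    `teacher_feats` maps a dataset name to that teacher's features."""
    return sum(mse(student_feat, t) for t in teacher_feats.values()) / len(teacher_feats)

def class_prototypes(features, labels):
    """Common-mode sketch: a class prototype is the mean feature vector
    over all samples of that class. Prototypes computed per dataset
    would then be aligned in a shared feature space."""
    sums, counts = {}, {}
    for feat, label in zip(features, labels):
        acc = sums.setdefault(label, [0.0] * len(feat))
        for i, v in enumerate(feat):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}
```

For example, with two "gun" samples `[0, 2]` and `[2, 0]`, the "gun" prototype is their mean `[1, 1]`; the distillation loss of a student feature against two teachers is just the mean of the per-teacher MSEs.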