Computer science
Hash function
Artificial intelligence
Pattern recognition (psychology)
Fusion
Transformer
Scale (ratio)
Engineering
Electrical engineering
Cartography
Computer security
Linguistics
Philosophy
Voltage
Geography
Identifier
DOI: 10.1109/icassp49357.2023.10094794
Abstract
Deep image hashing aims to map input images into compact binary hash codes via deep neural networks. Motivated by the recent advances of Vision Transformers (ViT), many ViT-based deep hashing methods have been proposed. Nevertheless, ViT has an enormous number of model parameters and high computational complexity. Moreover, only the classification token output by ViT's last layer is used as the image feature vector, while the remaining token vectors are discarded. This makes the model computation inefficient and neglects useful image information. Therefore, this paper proposes a Transformer-based deep hashing method with multi-scale feature fusion (TDH). Specifically, we use a hierarchical Transformer backbone to capture both global and local features of images. The hierarchical Transformer employs a local self-attention mechanism to process image blocks in parallel, which reduces computational complexity and improves computational efficiency. A multi-scale feature fusion module aggregates all feature vectors output by the hierarchical Transformer to obtain richer image feature information. We perform comprehensive experiments on three widely studied datasets: CIFAR-10, NUS-WIDE, and IMAGENET. The experimental results demonstrate that the proposed method outperforms existing state-of-the-art work. Source code is available at https://github.com/shuaichaochao/TDH.
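To make the idea of fusing multi-scale Transformer features into hash codes concrete, here is a minimal PyTorch sketch of a hypothetical fusion head. It assumes a hierarchical backbone (Swin-style stage shapes are used only as placeholders) that returns one token tensor per stage; the class name MultiScaleHashHead, the per-stage projections, and the tanh/sign binarization are illustrative assumptions, not the authors' actual TDH module, for which the linked repository is authoritative.

```python
import torch
import torch.nn as nn

class MultiScaleHashHead(nn.Module):
    """Hypothetical sketch: fuse feature tokens from several backbone
    stages into a single hash code. The real TDH fusion module may
    differ; see https://github.com/shuaichaochao/TDH for the actual code."""

    def __init__(self, stage_dims, hash_bits=64):
        super().__init__()
        # One linear projection per stage so all scales share a common width.
        self.projections = nn.ModuleList(
            nn.Linear(dim, hash_bits) for dim in stage_dims
        )
        # Concatenate the per-stage summaries, then mix them into one code.
        self.fuse = nn.Linear(hash_bits * len(stage_dims), hash_bits)

    def forward(self, stage_features):
        # stage_features: list of (B, N_i, C_i) token tensors, one per stage.
        pooled = [
            proj(tokens.mean(dim=1))  # global average pool over tokens
            for proj, tokens in zip(self.projections, stage_features)
        ]
        fused = self.fuse(torch.cat(pooled, dim=-1))
        # tanh is a differentiable surrogate for binary codes during training;
        # at inference the code is binarized with sign().
        return torch.tanh(fused)

if __name__ == "__main__":
    # Token counts and channel widths loosely follow a Swin-like hierarchy
    # for a 224x224 input (assumed shapes, for illustration only).
    feats = [torch.randn(2, n, c)
             for n, c in [(3136, 96), (784, 192), (196, 384), (49, 768)]]
    head = MultiScaleHashHead(stage_dims=[96, 192, 384, 768], hash_bits=64)
    codes = head(feats)
    print(codes.shape)               # torch.Size([2, 64])
    print(torch.sign(codes)[0, :8])  # binarized bits at retrieval time
```

The design choice to pool and project every stage, rather than keep only a classification token, mirrors the abstract's motivation: no stage's token vectors are discarded, so coarse and fine-grained information both contribute to the final code.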