Hash function
Computer science
Streaming data
Distillation
Dual (grammatical number)
Dynamic perfect hashing
Consistent hashing
Hash table
Double hashing
Data mining
Chromatography
Art
Chemistry
Computer security
Literature
Authors
Chong-Yu Zhang, Xin Luo, Yu-Wei Zhan, Peng-Fei Zhang, Zhen-Duo Chen, Yongxin Wang, Xun Yang, Xin-Shun Xu
Identifier
DOI:10.1145/3581783.3612119
Abstract
With the continuous generation of massive amounts of multimedia data, hashing has demonstrated significant potential for large-scale search. To meet the emerging needs of streaming data retrieval, online hashing is drawing increasing attention. In the online scenario, the data distribution may change and concept drift may occur as new data is continuously added to the database. Inevitably, hashing models may lose or disrupt previously obtained knowledge when learning from new information, a problem known as catastrophic forgetting. In this paper, we propose a new online hashing method, Self-distillation Dual-memory Online Hashing with Hash Centers (abbreviated SDOH-HC), to overcome this challenge. Specifically, SDOH-HC contains replay and distillation modules. For replay, a dual-memory mechanism is proposed that involves both hash centers and exemplars. For knowledge distillation, we let the hash centers distill information from themselves, i.e., their version from the last round. A new objective function is then built on these modules and solved discretely to learn hash codes. Extensive experiments on three benchmark datasets demonstrate the effectiveness of our method.
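The abstract's two key ideas, a dual memory of hash centers plus exemplars and self-distillation of the centers against their previous-round version, can be illustrated with a minimal sketch. This is not the authors' actual algorithm: the update rule, the blending weight `lam`, and the MSE-style distillation penalty are all simplifying assumptions for illustration only, and the real method solves a discrete objective rather than the sign-blend heuristic shown here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_bits = 4, 16

# Hash centers: one target binary code per class, in {-1, +1}^n_bits.
centers = np.sign(rng.standard_normal((n_classes, n_bits)))
centers[centers == 0] = 1.0

# Dual memory: the centers above plus a small exemplar buffer of past data.
exemplar_memory = []  # list of (features, labels) pairs from earlier rounds

def self_distillation_loss(centers_now, centers_prev):
    """Penalize drift of the current centers from last round's version
    (a stand-in for the paper's self-distillation term)."""
    return float(np.mean((centers_now - centers_prev) ** 2))

def online_round(new_data, new_labels, centers, centers_prev, lam=0.5):
    """One simplified streaming update: move each class center toward the
    sign of the mean of its new data, while anchoring it to the previous
    round's center to resist catastrophic forgetting."""
    updated = centers.copy()
    for c in np.unique(new_labels):
        batch_mean = new_data[new_labels == c].mean(axis=0)
        target = np.sign(batch_mean + 1e-12)
        # Blend new evidence with the old center, then re-binarize.
        blended = (1 - lam) * target + lam * centers_prev[c]
        updated[c] = np.sign(blended + 1e-12)
    return updated

# One round of streaming data with random features (illustrative only).
X = rng.standard_normal((32, n_bits))
y = rng.integers(0, n_classes, size=32)
exemplar_memory.append((X[:4], y[:4]))  # replay buffer keeps a few exemplars
new_centers = online_round(X, y, centers, centers_prev=centers)
print(self_distillation_loss(new_centers, centers))
```

The blending step shows the trade-off the abstract describes: a larger `lam` preserves more old knowledge (less forgetting), while a smaller `lam` adapts faster to the drifted distribution.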