Pattern recognition (psychology)
Machine learning
Object detection
Video tracking
Convolutional neural network
Authors
Jinpeng Wang, Yuting Gao, Ke Li, Yiqi Lin, Andy J. Ma, Hao Cheng, Pai Peng, Feiyue Huang, Rongrong Ji, Xing Sun
Source
Venue: Computer Vision and Pattern Recognition (CVPR)
Date: 2021-06-01
Pages: 11804-11813
Citations: 14
Identifier
DOI:10.1109/cvpr46437.2021.01163
Abstract
Self-supervised learning has shown great potential for improving the video representation ability of deep neural networks by obtaining supervision from the data itself. However, some current methods tend to cheat from the background, i.e., the prediction is highly dependent on the video background instead of the motion, making the model vulnerable to background changes. To mitigate the model's reliance on the background, we propose to remove the background impact by adding the background. That is, given a video, we randomly select a static frame and add it to every other frame to construct a distracting video sample. We then force the model to pull the feature of the distracting video and the feature of the original video closer, so that the model is explicitly restricted to resist the background influence and focuses more on the motion changes. We term our method Background Erasing (BE). It is worth noting that our method is simple and neat, and can be added to most SOTA methods with little effort. Specifically, BE brings 16.4% and 19.1% improvements with MoCo on the severely biased datasets UCF101 and HMDB51, and a 14.5% improvement on the less biased dataset Diving48.
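The core augmentation described in the abstract — blending one randomly chosen static frame into every frame of a clip to build the distracting sample — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the mixing weight `lam` and the convex-blend formulation are assumptions for the sketch.

```python
import numpy as np

def background_erasing(video, lam=0.3, rng=None):
    """Construct a distracting video sample in the spirit of
    Background Erasing (BE): pick one random static frame from the
    clip and blend it into every frame.

    video: float array of shape (T, H, W, C) with values in [0, 1].
    lam:   mixing weight for the static frame (assumed value; the
           paper's exact blending scheme may differ).
    """
    rng = rng or np.random.default_rng()
    t = int(rng.integers(video.shape[0]))   # index of the static frame
    static = video[t]                       # shape (H, W, C)
    # Convex blend keeps values in [0, 1]; broadcasting adds the
    # same static frame to every time step.
    return (1.0 - lam) * video + lam * static[None]
```

During contrastive pretraining (e.g., with MoCo as in the abstract), the features of `video` and of `background_erasing(video)` would be treated as a positive pair, so the encoder cannot rely on the (now shared, distracting) background to match them.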