Tao Liu, Gang Wan, Hongyang Bai, Xiaofang Kong, Beibei Tang, F. Wang
Source
Journal: IEEE Transactions on Instrumentation and Measurement (Institute of Electrical and Electronics Engineers) · Date: 2024-01-01 · Volume 73: 1-13
Identifier
DOI:10.1109/tim.2023.3342849
Abstract
With numerous applications in intelligent unmanned systems for flight patrol, airborne cameras have become crucial tools for measuring and tracking targets. However, the video captured by these cameras is susceptible to external disturbances and jitter, and traditional stabilizers often fail to accurately extract image feature points. Although deep learning approaches can stabilize videos, they are constrained by limited datasets and weak model controllability, making real-time performance difficult to achieve. We introduce a SuperPoint-based stabilization framework built on deep-learning feature-point detection. By combining traditional and deep learning methods, our approach constructs a controllable, real-time video stabilizer. First, we extract image feature points using the SuperPoint neural network, which outperforms traditional handcrafted feature detectors. Second, the extracted feature points are homogenized so that they are distributed evenly across the image. Third, we adopt pyramid Lucas-Kanade (LK) optical flow to improve feature-point matching speed and motion-estimation accuracy. Finally, we define a moving-average filter and a Kalman filter, combine them to smooth unstable camera trajectories, and output stable video sequences after motion compensation. Experimental results show that the proposed method is competitive with current representative methods and, more importantly, takes an average of only 32 ms to stabilize a frame, which is faster than the others.
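The trajectory-smoothing step described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the window size, noise parameters, blending weight `alpha`, and the random-walk motion model are all assumptions chosen for illustration, and the example smooths a synthetic one-dimensional camera path rather than a real estimated trajectory.

```python
# Illustrative sketch: smoothing a jittery 1-D camera trajectory with a
# moving-average filter and a scalar Kalman filter, then blending the two,
# as the abstract's final step describes. All parameters are assumed.

def moving_average(traj, window=5):
    """Centered moving average; at the edges, use whatever neighbors exist."""
    half = window // 2
    out = []
    for i in range(len(traj)):
        lo, hi = max(0, i - half), min(len(traj), i + half + 1)
        out.append(sum(traj[lo:hi]) / (hi - lo))
    return out

def kalman_1d(traj, q=1e-3, r=1e-1):
    """Scalar Kalman filter under a random-walk (constant-position) model.
    q: assumed process-noise variance, r: assumed measurement-noise variance."""
    x, p = traj[0], 1.0          # initial state estimate and covariance
    out = []
    for z in traj:
        p += q                   # predict: covariance grows by process noise
        k = p / (p + r)          # Kalman gain
        x += k * (z - x)         # update toward the measurement z
        p *= (1.0 - k)           # shrink covariance after the update
        out.append(x)
    return out

def smooth_trajectory(traj, alpha=0.5):
    """Blend the two smoothers; alpha weights the moving-average output."""
    ma = moving_average(traj)
    kf = kalman_1d(traj)
    return [alpha * a + (1.0 - alpha) * b for a, b in zip(ma, kf)]

# Example: a jittery horizontal camera path (synthetic data).
path = [0.0, 1.5, 0.8, 2.4, 1.9, 3.1, 2.7, 4.0]
smoothed = smooth_trajectory(path)
```

The smoothed path has much lower frame-to-frame variation than the input; in a stabilizer, the per-frame difference between the raw and smoothed trajectories would drive the motion-compensation warp.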