Artificial intelligence
Pose
Computer science
Monocular
Feature extraction
Computer vision
Convolutional neural network
End-to-end principle
Pattern recognition (psychology)
Object detection
Authors
Fengyi Liu, Z. Zhang, Sijue Li
Source
Journal: IEEE Transactions on Aerospace and Electronic Systems
[Institute of Electrical and Electronics Engineers]
Date: 2023-01-01
Pages: 1-18
Identifier
DOI: 10.1109/taes.2023.3332075
Abstract
Monocular vision-based pose estimation of non-cooperative target spacecraft is vital for tasks such as on-orbit servicing and debris removal. While deep learning has improved monocular spacecraft pose estimation, existing methods suffer from two limitations. First, the prevailing two-stage methods separate the object detection and pose estimation stages, which prevents end-to-end training and entails redundant feature extraction. Second, over-reliance on convolutional neural networks (CNNs) can result in excessive dependence on texture and inadequate modeling of long-range dependencies. To address these drawbacks, we propose a Deformable Transformer-based Single-stage End-to-end SpaceNet (DTSE-SpaceNet). The network dynamically fuses features from multiple scales to predict keypoints, from which pose parameters are derived using the Perspective-n-Point (PnP) method. Furthermore, a novel shape loss function improves the geometric accuracy of the keypoints and reduces outliers, enhancing overall performance. Extensive experiments on multiple public benchmark datasets demonstrate competitive performance and strong generalization capability, with lower computation and parameter costs than two-stage methods.
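The keypoints-to-pose step described in the abstract is the classical PnP problem: given the 2D image locations of known 3D model keypoints and the camera intrinsics, recover the camera-to-target rotation and translation. As a minimal illustration (not the authors' solver, which may use a more robust variant such as EPnP with RANSAC), a Direct Linear Transform (DLT) PnP can be sketched in NumPy:

```python
import numpy as np

def dlt_pnp(pts3d, pts2d, K):
    """Estimate pose [R|t] from 3D-2D keypoint correspondences via DLT.

    pts3d: (n, 3) model keypoints; pts2d: (n, 2) detected pixel coords;
    K: 3x3 camera intrinsics. Requires n >= 6 non-coplanar points.
    """
    n = len(pts3d)
    # Normalize pixel coordinates with the inverse intrinsics.
    uv1 = np.hstack([pts2d, np.ones((n, 1))])
    xn = (np.linalg.inv(K) @ uv1.T).T

    # Each correspondence yields two linear equations in the 12 entries of P.
    A = []
    for (X, Y, Z), (x, y, _) in zip(pts3d, xn):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z, -x])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z, -y])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    P = Vt[-1].reshape(3, 4)  # solution up to scale and sign

    # Resolve the global sign so the points lie in front of the camera.
    Xh = np.hstack([pts3d, np.ones((n, 1))])
    if np.mean((P @ Xh.T)[2]) < 0:
        P = -P

    # Project the left 3x3 block onto the rotation group; recover scale.
    U, S, Vt2 = np.linalg.svd(P[:, :3])
    R = U @ Vt2
    if np.linalg.det(R) < 0:          # safety: enforce a proper rotation
        Vt2[-1] *= -1
        R = U @ Vt2
    t = P[:, 3] / S.mean()
    return R, t
```

With exact, noise-free correspondences this recovers the ground-truth pose to numerical precision; in practice the keypoints predicted by a network are noisy, which is why robust PnP variants are typically preferred.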