Keywords
Artificial intelligence
Computer vision
Computer science
Radar
Sensor fusion
Inertial measurement unit
Probabilistic logic
Projection (relational algebra)
Radar engineering details
Radar imaging
Algorithm
Telecommunications
Authors
Zhenglin Li, Tianxin Yuan, Liyan Ma, Yang Zhou, Yan Peng
Identifiers
DOI: 10.1109/jsen.2024.3394703
Abstract
Unmanned surface vehicles (USVs) have been deployed for a wide range of tasks over the past decades. Accurate perception of the surrounding water-surface environment under complex conditions is crucial for USVs to conduct effective operations. This paper proposes a radar-vision fusion framework for USVs to accurately detect typical targets on the water surface. The modality difference between images and radar measurements, along with their perpendicular coordinate planes, presents challenges in the fusion process. The swaying of USVs on water and the extensive perception areas further complicate multi-sensor data association. To address these problems, we propose two modules to enhance multi-sensor fusion performance: a movement-compensated projection module and a distance-aware probabilistic data association module. The former reduces projection bias when aligning radar and camera signals by compensating for sensor movement using the roll and pitch angles measured by the inertial measurement unit (IMU). The latter models the target region guided by each radar measurement as a bivariate Gaussian distribution whose covariance matrix is adaptively derived from the distance between the target and the camera. Consequently, the association of radar points and images is robust to projection errors and works well for multi-scale objects. Features of radar points and images are then extracted by two parallel backbones and fused at multiple levels to provide sufficient semantic information for robust object detection. The proposed framework achieves an AP of 0.501 on a challenging real-world dataset that we established, outperforming state-of-the-art vision-only and radar-vision fusion methods.
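To make the movement-compensated projection concrete, the sketch below shows one plausible form of the compensation, assuming roll and pitch are rotations about the x and y axes and that the radar-to-camera extrinsics `R_rc`, `t_rc` and camera intrinsics `K` are known from calibration. All names and the rotation order are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def compensation_rotation(roll: float, pitch: float) -> np.ndarray:
    """Rotation that cancels the USV's measured roll and pitch (radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    R_roll = np.array([[1.0, 0.0, 0.0],
                       [0.0,  cr, -sr],
                       [0.0,  sr,  cr]])   # rotation about the x axis
    R_pitch = np.array([[ cp, 0.0,  sp],
                        [0.0, 1.0, 0.0],
                        [-sp, 0.0,  cp]])  # rotation about the y axis
    # The transpose (inverse) of the measured attitude removes the sway.
    return (R_pitch @ R_roll).T

def project_radar_to_image(p_radar, roll, pitch, R_rc, t_rc, K):
    """Project one 3-D radar point into pixel coordinates with sway compensation."""
    p_level = compensation_rotation(roll, pitch) @ p_radar  # de-sway the measurement
    p_cam = R_rc @ p_level + t_rc                           # radar frame -> camera frame
    u, v, w = K @ p_cam                                     # pinhole projection
    return np.array([u / w, v / w])
```

The ordering matters: the compensation is applied before the fixed extrinsic transform, so the calibrated radar-camera geometry remains valid while the platform sways.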
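The distance-aware association can likewise be sketched as a bivariate Gaussian weight around each projected radar point. The sketch below assumes an isotropic covariance whose standard deviation shrinks with range (distant targets occupy fewer pixels); `sigma0` and `d0` are hypothetical scale constants, not values reported in the paper.

```python
import numpy as np

def association_weight(pixel, center, distance, sigma0=50.0, d0=10.0):
    """Bivariate Gaussian weight of an image location given a projected radar point.

    `sigma0` (pixels) and `d0` (metres) are illustrative constants controlling
    how fast the spread shrinks as the target moves away from the camera.
    """
    sigma = sigma0 * d0 / max(distance, 1e-6)   # distance-adaptive spread
    cov = np.diag([sigma**2, sigma**2])         # isotropic covariance (assumption)
    diff = np.asarray(pixel, float) - np.asarray(center, float)
    mahal = diff @ np.linalg.inv(cov) @ diff    # squared Mahalanobis distance
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
    return norm * np.exp(-0.5 * mahal)
```

Because the spread adapts to range, a small projection error at long range does not cause a radar return to be associated with an unrelated image region, while nearby, large targets still receive a wide association footprint.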