Fuse (electrical)
Sensing (electronics)
Radar
Low altitude
Track (disk drive)
Fusion
Computer science
Aeronautics
Altitude (triangle)
Aerospace engineering
Computer vision
Artificial intelligence
Engineering
Telecommunications
Electrical engineering
Mathematics
Philosophy
Operating systems
Linguistics
Geometry
Authors
Federica Vitiello, Flavia Causa, Roberto Opromolla, Giancarmine Fasano
Identifier
DOI: 10.1016/j.ast.2024.108946
Abstract
Non-cooperative Sense and Avoid is a critical technology for the safety and autonomy of Unmanned Aerial Vehicles (UAV). Standalone sensing solutions, i.e., those based only on either visual cameras or radars, encounter challenges especially for vehicles flying at low altitude. To overcome this limit, sensor fusion strategies can play a key role. In this framework, this paper proposes a two-step radar/visual sensor fusion approach operating at both the detection and tracking levels. The first step, named "Fuse-before-Track", consists of jointly using radar information and visual detections (provided by Convolutional Neural Network-based detectors) to remove uninteresting radar echoes, thus improving ground clutter removal and speeding up the radar processing pipeline. At the second level, tracking takes place by exploiting the previously retrieved (confirmed) radar measurements and fusing visual detections to improve the solution accuracy. The proposed approach is tested on data collected during experimental flight tests in which a ground-fixed multi-sensor setup (integrating a low size, weight and power radar and a daylight camera) is used to detect and track a small UAV manually piloted to carry out approaching manoeuvres. Detection and tracking performance is assessed using, as a benchmark, a cm-level relative positioning solution retrieved by means of Carrier Phase Differential GNSS techniques. The implemented detection-level fusion approach ensures radar detection accuracy at meter level and meter-per-second level on range and range rate, respectively. In addition, the second level of fusion allows attaining sub-degree-level errors in the angular and angular rate estimates at the tracking stage. Tracking data are finally used for conflict threat assessment, i.e., to obtain estimates of the distance and time at the closest point of approach, with mean errors on the former of about 10 m in most encounters when the latter falls below 50 s.
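The conflict threat assessment mentioned at the end of the abstract rests on the standard constant-velocity closest-point-of-approach (CPA) geometry: given the relative position and velocity of the intruder, the time at CPA minimizes the squared separation, and the distance at CPA is the separation at that time. The sketch below illustrates that textbook computation only; the function name and the clamping of negative (receding) times to zero are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def cpa(rel_pos, rel_vel):
    """Time and distance at closest point of approach (CPA).

    rel_pos: relative position of the intruder w.r.t. the sensor (m)
    rel_vel: relative velocity of the intruder (m/s)
    Assumes constant relative velocity. Minimizing
    |rel_pos + t * rel_vel|^2 over t gives
    t_cpa = -(rel_pos . rel_vel) / |rel_vel|^2.
    t_cpa is clamped to >= 0 (illustrative choice) so a receding
    intruder reports the current range as its miss distance.
    """
    rel_pos = np.asarray(rel_pos, dtype=float)
    rel_vel = np.asarray(rel_vel, dtype=float)
    v2 = float(rel_vel @ rel_vel)
    t_cpa = 0.0 if v2 == 0.0 else max(0.0, -(rel_pos @ rel_vel) / v2)
    d_cpa = float(np.linalg.norm(rel_pos + t_cpa * rel_vel))
    return t_cpa, d_cpa

# Intruder 100 m ahead with 30 m lateral offset, closing at 10 m/s:
t, d = cpa([100.0, 30.0, 0.0], [-10.0, 0.0, 0.0])
# t_cpa = 10 s, d_cpa = 30 m (the lateral offset is the miss distance)
```

In the paper's setup these relative states would come from the fused radar/visual track (range and range rate from radar, angles and angular rates refined by the camera), which is why the reported meter-level range and sub-degree angular accuracies translate into ~10 m mean errors on the CPA distance.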