Obstacle
Computer Science
Computer Vision
Remote Sensing
Environmental Science
Artificial Intelligence
Geography
Archaeology
Authors
Xiaomei Li, Xiong Deng, Xiaoyong Wu, Zhijiang Xie
Identifier
DOI:10.1088/1361-6501/ada4c8
Abstract
As a key step in obstacle avoidance and path planning, obstacle detection via camera sensors is crucial for autonomous driving. Real traffic road environments are complex and variable, and existing obstacle detection algorithms still suffer from insufficient sensing ability. This work therefore proposes a camera-sensor-based Strong Sensing DEtection TRansformer (SS-DETR) obstacle detection model for autonomous driving. First, a Receptive-Field Attention ResNet (RFARNet) is designed to improve feature analysis and extraction by accounting for the importance of receptive-field spatial features and channels. Then, an intra-scale feature interaction (IFI) module based on multiple information fusion attention (MIFA) is created to strengthen the representation of high-level feature maps. Furthermore, the cross-scale feature-fusion module (CFM) is optimized to extract more detailed information from multi-scale feature maps. Finally, a localization loss function based on L1 and Powerful Intersection over Union (PIoU) v2 is adopted to further boost detection performance. To verify the efficacy of the proposed model, the KITTI dataset, which contains camera-based road obstacle images, is adopted. The experimental results show that, compared to Real-Time DETR (RT-DETR), SS-DETR improves mean Average Precision (mAP)@50:95 and mAP@50 by 2.4% and 1.9%, respectively, while achieving a real-time inference speed of 33.7 frames per second (FPS). To further confirm the generalization ability of the approach, experiments are also conducted on the camera-based Cityscapes dataset. The results indicate that the proposed strategy effectively raises obstacle detection accuracy and offers a fresh perspective on obstacle identification.
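The abstract does not spell out the exact form of the localization loss, but DETR-style detectors conventionally pair an L1 box-regression term with an IoU-based term. The sketch below illustrates that general pattern only: a plain IoU loss stands in for PIoU v2 (whose penalty and attention terms are not given here), and the box format (normalized cx, cy, w, h), the weights `lambda_l1` and `lambda_iou`, and all function names are illustrative assumptions rather than the paper's implementation.

```python
# Minimal sketch of a DETR-style localization loss combining an L1 term with
# an IoU-based term. Plain IoU is a stand-in for PIoU v2; the box format,
# weights, and names are assumptions, not the paper's actual implementation.
import torch


def box_cxcywh_to_xyxy(boxes: torch.Tensor) -> torch.Tensor:
    """Convert (cx, cy, w, h) boxes to (x1, y1, x2, y2)."""
    cx, cy, w, h = boxes.unbind(-1)
    return torch.stack((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2), dim=-1)


def iou_loss(pred_xyxy: torch.Tensor, tgt_xyxy: torch.Tensor) -> torch.Tensor:
    """1 - IoU for matched box pairs (placeholder for the PIoU v2 term)."""
    lt = torch.max(pred_xyxy[:, :2], tgt_xyxy[:, :2])  # intersection top-left
    rb = torch.min(pred_xyxy[:, 2:], tgt_xyxy[:, 2:])  # intersection bottom-right
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]
    area_p = (pred_xyxy[:, 2] - pred_xyxy[:, 0]) * (pred_xyxy[:, 3] - pred_xyxy[:, 1])
    area_t = (tgt_xyxy[:, 2] - tgt_xyxy[:, 0]) * (tgt_xyxy[:, 3] - tgt_xyxy[:, 1])
    union = area_p + area_t - inter
    return 1.0 - inter / union.clamp(min=1e-7)


def localization_loss(pred_boxes: torch.Tensor,
                      target_boxes: torch.Tensor,
                      lambda_l1: float = 5.0,
                      lambda_iou: float = 2.0) -> torch.Tensor:
    """Weighted sum of L1 and IoU-based losses over matched query/target pairs."""
    l1 = torch.nn.functional.l1_loss(pred_boxes, target_boxes, reduction="mean")
    iou = iou_loss(box_cxcywh_to_xyxy(pred_boxes),
                   box_cxcywh_to_xyxy(target_boxes)).mean()
    return lambda_l1 * l1 + lambda_iou * iou


if __name__ == "__main__":
    pred = torch.rand(8, 4)  # toy matched predictions in (cx, cy, w, h)
    tgt = torch.rand(8, 4)   # toy matched targets in the same format
    print(localization_loss(pred, tgt))
```

The weights 5.0 and 2.0 follow the values commonly used for the L1 and IoU terms in DETR-family detectors; the actual weighting in SS-DETR is not stated in the abstract.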