A Preliminary Study of Deep Learning Sensor Fusion for Pedestrian Detection

Keywords: artificial intelligence, lidar, computer science, computer vision, convolutional neural network, pedestrian detection, RGB color model, deep learning, radar, object detection, pixel, segmentation, remote sensing, pedestrian, engineering, transportation engineering
Authors
Alfredo Chávez Plascencia,Pablo García-Gómez,Eduardo Bernal Perez,Gerard DeMas-Giménez,Josep R. Casas,Santiago Royo
Source
Journal: Sensors [MDPI AG]
Volume/Issue: 23(8): 4167. Cited by: 2
Identifier
DOI:10.3390/s23084167
Abstract

Most pedestrian detection methods focus on bounding boxes obtained by fusing RGB with lidar. These methods do not relate to how the human eye perceives objects in the real world. Furthermore, lidar and vision can have difficulty detecting pedestrians in scattered environments, a problem that radar can help overcome. The motivation of this work is therefore to explore, as a preliminary step, the feasibility of fusing lidar, radar, and RGB for pedestrian detection, with potential application to autonomous driving, using a fully connected convolutional neural network architecture for multimodal sensors. The core of the network is based on SegNet, a pixel-wise semantic segmentation network. Lidar and radar data were incorporated by transforming their 3D point clouds into 2D grayscale images with 16-bit depth, while RGB images were incorporated with three channels. The proposed architecture uses one SegNet per sensor reading; the outputs are then fed to a fully connected neural network that fuses the three sensor modalities, and an up-sampling network is applied to recover the fused data. A custom dataset was assembled: 60 images for training, 10 for validation, and 10 for testing, giving a total of 80 images. The experimental results show a training mean pixel accuracy of 99.7% and a training mean intersection over union (IoU) of 99.5%; on the test set, the mean IoU was 94.4% and the pixel accuracy was 96.2%. These metrics demonstrate the effectiveness of semantic segmentation for pedestrian detection under the three sensor modalities. Despite some overfitting during experimentation, the model performed well at detecting people at test time. The focus of this work is thus to show that the method is feasible, as it works regardless of the size of the dataset.
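The reported metrics, mean pixel accuracy and mean IoU, follow standard definitions computed from a confusion matrix over pixel labels. The sketch below (not the authors' code) illustrates these definitions with NumPy on hypothetical binary pedestrian/background masks:

```python
import numpy as np

def segmentation_metrics(pred, target, num_classes=2):
    """Mean pixel accuracy and mean IoU from integer label maps."""
    # Confusion matrix: rows = ground truth class, cols = predicted class.
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(target.ravel(), pred.ravel()):
        cm[t, p] += 1
    tp = np.diag(cm).astype(float)
    per_class_acc = tp / cm.sum(axis=1)                      # per-class recall
    iou = tp / (cm.sum(axis=1) + cm.sum(axis=0) - tp)        # per-class IoU
    return per_class_acc.mean(), iou.mean()

# Toy 4x4 masks (1 = pedestrian, 0 = background); one pedestrian pixel missed.
target = np.array([[0,0,1,1],[0,0,1,1],[0,0,0,0],[0,0,0,0]])
pred   = np.array([[0,0,1,1],[0,0,1,0],[0,0,0,0],[0,0,0,0]])
acc, miou = segmentation_metrics(pred, target)
# acc = 0.875 (background 12/12, pedestrian 3/4); miou = (12/13 + 3/4) / 2
```

Averaging per-class scores rather than pooling all pixels keeps the small pedestrian class from being swamped by the background class.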
A larger dataset would nevertheless be necessary for more appropriate training. This method has the advantage of detecting pedestrians as the human eye does, thereby resulting in less ambiguity. Additionally, this work proposes an extrinsic calibration matrix method, based on singular value decomposition, for sensor alignment between radar and lidar.
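The abstract does not detail the SVD-based calibration, but a common way to recover a rigid transform between two sensor frames from matched 3D point pairs is the Kabsch algorithm. The sketch below assumes known radar-lidar point correspondences; the function and variable names are illustrative, not the authors' implementation:

```python
import numpy as np

def rigid_transform_svd(src, dst):
    """Best-fit rotation R and translation t with dst ~= R @ src + t (Kabsch)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Hypothetical correspondences: targets seen by radar and by lidar.
rng = np.random.default_rng(0)
radar_pts = rng.normal(size=(10, 3))
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
lidar_pts = radar_pts @ R_true.T + t_true
R_est, t_est = rigid_transform_svd(radar_pts, lidar_pts)
```

With noise-free correspondences the estimate recovers the true rotation and translation; with noisy real sensor data it returns the least-squares best fit.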