Computer science
Global Positioning System
Leverage (statistics)
Computer vision
Artificial intelligence
Intelligent transportation system
Object detection
Traffic (computer networking)
Deep learning
Real-time computing
Authors
Ashutosh Kumar, Takehiro Kashiyama, Hiroya Maeda, Yoshihide Sekimoto
Identifier
DOI: 10.1109/bigdata52589.2021.9671751
Abstract
Analysis of traffic flow parameters is necessary for Intelligent Transportation Systems (ITS) and autonomous driving research. Deep learning-based vehicle detection techniques have been widely used in reconstructing traffic flow parameters from video images. This research proposes a novel cross-sectional traffic flow estimation algorithm to reconstruct traffic volume from moving camera videos. We develop a vehicle detection dataset with more than one million annotations of vehicles with orientation and train a YOLOv4-based object detection network. We leverage the accurate vehicle detection model in tracking and estimating the distance of detected vehicles using Simple Online and Realtime Tracking (SORT) and photogrammetry techniques. The estimated distances and forward bearing of the observing vehicle are then utilized to calculate the GPS position of detected vehicles and used in the algorithm to estimate cross-sectional traffic flow. We utilize the proposed algorithm to estimate the traffic flow of 580 OpenStreetMap (OSM) road links and achieve an average accuracy of 84.30%, verified against data from 11 traffic police sensors in Susono City, Japan. The proposed large-scale dataset and cross-sectional traffic flow estimation algorithm open new avenues for ITS and autonomous driving research.
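The geolocation step described in the abstract combines the observing vehicle's own GPS fix and forward bearing with the photogrammetrically estimated distance to each detection. Below is a minimal sketch of that projection, assuming a standard spherical-Earth great-circle forward formula; the paper does not specify the exact formula, and the function name `project_position` and the example coordinates are illustrative only, not taken from the paper.

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius, metres

def project_position(lat_deg, lon_deg, bearing_deg, distance_m):
    """Project a point `distance_m` metres from (lat_deg, lon_deg) along
    `bearing_deg` (clockwise from true north), assuming a spherical Earth.
    The paper only states that estimated distance and forward bearing are
    used to compute detected-vehicle GPS positions; this particular
    great-circle formula is an assumption for illustration."""
    lat1 = math.radians(lat_deg)
    lon1 = math.radians(lon_deg)
    theta = math.radians(bearing_deg)
    delta = distance_m / EARTH_RADIUS_M  # angular distance

    lat2 = math.asin(
        math.sin(lat1) * math.cos(delta)
        + math.cos(lat1) * math.sin(delta) * math.cos(theta)
    )
    lon2 = lon1 + math.atan2(
        math.sin(theta) * math.sin(delta) * math.cos(lat1),
        math.cos(delta) - math.sin(lat1) * math.sin(lat2),
    )
    return math.degrees(lat2), math.degrees(lon2)

# Hypothetical example: a detection estimated 25 m ahead of a camera
# vehicle at (35.1740, 138.9070) heading due east (90 degrees).
veh_lat, veh_lon = project_position(35.1740, 138.9070, 90.0, 25.0)
print(f"Detected vehicle at approximately ({veh_lat:.6f}, {veh_lon:.6f})")
```

In the pipeline the abstract outlines, positions produced this way would then be matched against OSM road links so that vehicles crossing a given link can be counted as cross-sectional traffic flow.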