Computer Science
Deep Learning
Real-time Computing
Artificial Intelligence
Aeronautics
Engineering
Authors
Mohammad Amir, Ahteshamul Haque, Zaheer-ud Din
Source
Journal: CRC Press eBooks
[Informa]
Date: 2023-12-20
Pages: 157-176
Identifier
DOI: 10.1201/9781032669809-7
Abstract
Autonomous vehicle identification is one of the emerging applications of vehicle-to-vehicle (V2V) detection in smart traffic monitoring. The prime aim of this chapter is to address existing vehicle identification issues such as low vehicle detection accuracy, minimum speed detection, and recognition of vehicle types. This chapter proposes a deep learning-based approach that extracts vehicle type using the YOLOv2 model. In this model, a clustering algorithm (k-means++) is employed to group the ground-truth bounding boxes of the training dataset into anchors of distinct sizes. Further reducing the losses in length (l) and width (w) of the anchor bounding boxes for various four-wheeled vehicles enhances vehicle identification on normalized image datasets. To improve the feature extraction capability of the ImageNet model, a multi-layer feature fusion approach is also implemented to eliminate redundant high-level convolution layers. For mean Average Precision (mAP) estimation, training is performed on vehicular image datasets, using the CompCars and Kaggle vehicle datasets taken from BIT-China. The proposed YOLOv2 model also demonstrates superior generalization and enhanced feature extraction capability compared with the Comp_model. The comparative analysis shows that the proposed model achieves a better average precision during V2V detection.
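The anchor-selection step mentioned in the abstract can be illustrated with a minimal sketch. The code below is not the authors' implementation; it assumes a hypothetical array `box_wh` of normalized (width, height) pairs taken from ground-truth boxes in a training set, and uses scikit-learn's k-means++ initialization to derive anchor box priors of distinct sizes, as the chapter describes for the YOLOv2 model.

```python
# Hedged sketch: deriving YOLOv2-style anchor priors with k-means++ clustering.
# Assumption: `box_wh` holds normalized (width, height) pairs extracted from the
# ground-truth bounding boxes of a training set; this is NOT the authors' code,
# only an illustration of the clustering step described in the abstract.
import numpy as np
from sklearn.cluster import KMeans


def derive_anchors(box_wh: np.ndarray, n_anchors: int = 5) -> np.ndarray:
    """Cluster ground-truth box sizes into `n_anchors` anchor priors."""
    km = KMeans(n_clusters=n_anchors, init="k-means++", n_init=10, random_state=0)
    km.fit(box_wh)
    # Each cluster centre is one (width, height) anchor prior.
    anchors = km.cluster_centers_
    # Sort by area so the smallest anchors come first (a common convention).
    return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]


if __name__ == "__main__":
    # Synthetic example: 1000 random boxes standing in for a real dataset.
    rng = np.random.default_rng(0)
    box_wh = rng.uniform(0.05, 0.9, size=(1000, 2))
    print(derive_anchors(box_wh))
```

In practice, YOLO-style anchor clustering often replaces the Euclidean distance with a 1 − IoU distance between box shapes; the sketch above keeps the standard k-means++ objective for brevity.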