Autonomous vehicle identification is one of the emerging applications of vehicle-to-vehicle (V2V) detection in smart traffic monitoring. The prime aim of this chapter is to address existing vehicle identification issues such as low detection accuracy, limited detection of vehicle speed, and detection of vehicle types. This chapter proposes a deep learning-based approach that identifies vehicle types using the YOLOv2 model. In this model, a clustering algorithm (k-means++) is employed to group the ground-truth bounding boxes of the training dataset into anchor boxes of distinct sizes. Further, reducing the losses in the length (l) and width (w) of the anchor bounding boxes for various four-wheeled vehicles enhances vehicle identification on normalized image datasets. To improve the feature extraction capability of the ImageNet model, a multi-layer feature fusion approach is also implemented to eliminate repeated high-level convolution layers. For mean Average Precision (mAP) estimation, the model is trained on vehicular image datasets comprising CompCars and the Kaggle vehicle dataset taken from BIT-China. The proposed YOLOv2 model also demonstrates better generalization and enhanced feature extraction capability than the Comp_model, and the comparative analysis shows that the proposed model achieves a higher average precision in V2V detection.
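As a rough illustration of the anchor selection step described above, the following sketch clusters the normalized width/height pairs of training-set bounding boxes using k-means with k-means++ initialization to obtain anchor box priors. The function name compute_anchors, the choice of five anchors, and the synthetic box data are illustrative assumptions, not details taken from the chapter.

```python
import numpy as np
from sklearn.cluster import KMeans

def compute_anchors(box_dims, num_anchors=5, seed=0):
    """Cluster normalized (width, height) pairs of ground-truth boxes
    into `num_anchors` anchor shapes using k-means with k-means++ init.

    box_dims : (N, 2) array of box widths and heights scaled to [0, 1].
    Returns a (num_anchors, 2) array of anchor (width, height) priors.
    """
    km = KMeans(n_clusters=num_anchors, init="k-means++",
                n_init=10, random_state=seed)
    km.fit(box_dims)
    # Each cluster centre is the representative (w, h) of one anchor box.
    return km.cluster_centers_

if __name__ == "__main__":
    # Toy stand-in for the training-set box dimensions (not real data).
    rng = np.random.default_rng(0)
    fake_boxes = np.abs(rng.normal(loc=[0.3, 0.2], scale=0.1, size=(500, 2)))
    anchors = compute_anchors(fake_boxes, num_anchors=5)
    print("anchor (w, h) priors:\n", anchors)
```

Note that the original YOLOv2 anchor selection clusters boxes with an IoU-based distance rather than the plain Euclidean distance on (w, h) used in this sketch; the simplification is only for brevity.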