Topics: End-to-end principle; Computer science; End user; Computer vision; Artificial intelligence; World Wide Web
Authors
Yi Xu,Yuxin Hu,Zaiwei Zhang,Gregory P. Meyer,Siva Karthik Mustikovela,Siddhartha S Srinivasa,Eric M. Wolff,Xin Huang
Source
Journal: Cornell University - arXiv
Date: 2024-12-18
Identifier
DOI: 10.48550/arxiv.2412.14446
Abstract
Human drivers rely on commonsense reasoning to navigate diverse and dynamic real-world scenarios. Existing end-to-end (E2E) autonomous driving (AD) models are typically optimized to mimic driving patterns observed in data, without capturing the underlying reasoning processes. This limitation constrains their ability to handle challenging driving scenarios. To close this gap, we propose VLM-AD, a method that leverages vision-language models (VLMs) as teachers to enhance training by providing additional supervision that incorporates unstructured reasoning information and structured action labels. Such supervision enhances the model's ability to learn richer feature representations that capture the rationale behind driving patterns. Importantly, our method does not require a VLM during inference, making it practical for real-time deployment. When integrated with state-of-the-art methods, VLM-AD achieves significant improvements in planning accuracy and reduced collision rates on the nuScenes dataset.
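The paper itself is not reproduced here, but the abstract describes a training-only distillation setup: VLM-generated annotations supervise the planner's features through auxiliary heads that are discarded at inference. The sketch below illustrates that idea under stated assumptions; the head shapes, feature dimensions, loss weights, and names are illustrative placeholders, not taken from the paper.

```python
# Minimal, hypothetical sketch of VLM-teacher auxiliary supervision
# (assumptions: dimensions, loss weights, and head designs are illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AuxiliaryVLMHeads(nn.Module):
    """Training-only heads supervised by offline VLM annotations:
    one regresses the VLM's free-form reasoning embedding (unstructured),
    one classifies the VLM's action label (structured)."""
    def __init__(self, feat_dim=256, text_dim=768, num_actions=8):
        super().__init__()
        self.reason_head = nn.Sequential(
            nn.Linear(feat_dim, text_dim), nn.ReLU(), nn.Linear(text_dim, text_dim)
        )
        self.action_head = nn.Linear(feat_dim, num_actions)

    def forward(self, planner_feat, vlm_text_emb, action_label):
        # Align planner features with the VLM reasoning embedding (cosine loss).
        pred_emb = self.reason_head(planner_feat)
        reason_loss = 1.0 - F.cosine_similarity(pred_emb, vlm_text_emb, dim=-1).mean()
        # Predict the structured action label annotated offline by the VLM.
        action_loss = F.cross_entropy(self.action_head(planner_feat), action_label)
        return reason_loss + 0.5 * action_loss  # placeholder weighting

# Usage (conceptual): total_loss = planning_loss + aux_heads(feat, vlm_emb, labels)
# Both the auxiliary heads and the VLM are dropped at inference, so the
# deployed planner incurs no extra runtime cost.
```

Because the VLM only annotates training data offline, this style of supervision keeps inference latency identical to the base end-to-end model, which is what makes the approach practical for real-time deployment as the abstract claims.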