Computer science
Task (project management)
Perception
Adverse weather
Artificial intelligence
Deep learning
Software deployment
Computer vision
Object detection
Relevance (law)
Haze
Image (mathematics)
Visual perception
Real-time computing
Pattern recognition (psychology)
Engineering
Physics
Systems engineering
Neuroscience
Meteorology
Law
Political science
Biology
Operating system
Authors
Younkwan Lee,Jihyo Jeon,Yeongmin Ko,Byunggwan Jeon,Moongu Jeon
Identifier
DOI:10.1109/icra48506.2021.9561076
Abstract
Visual perception in autonomous driving is a crucial capability for a vehicle to navigate safely and sustainably in different traffic conditions. However, in bad weather such as heavy rain and haze, the performance of visual perception is greatly affected by several degrading effects. Recently, deep learning-based perception methods have addressed multiple degrading effects to reflect real-world bad weather cases, but have shown limited success due to 1) high computational costs for deployment on mobile devices and 2) poor relevance between image enhancement and visual perception in terms of model ability. To solve these issues, we propose a task-driven image enhancement network connected to a high-level vision task, which takes an image corrupted by bad weather as input. Specifically, we introduce a novel low-memory network that removes most of the layer connections of dense blocks to reduce memory and computational cost while maintaining high performance. We also introduce a new task-driven training strategy that robustly guides the high-level task model toward both high-quality image restoration and highly accurate perception. Experimental results demonstrate that the proposed method largely improves the performance of lane detection, 2D object detection, and depth estimation under adverse weather, in terms of both memory footprint and accuracy.
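The abstract's core idea is to train an image-enhancement network jointly with the downstream perception model, so the enhanced image is optimized for the high-level task rather than for restoration quality alone. Below is a minimal PyTorch-style sketch of that task-driven training loop; the module names, the L1 restoration loss, and the weighting factor `lambda_task` are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class EnhancementNet(nn.Module):
    """Placeholder enhancement network (stand-in for the paper's low-memory model)."""

    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        # Predict a residual correction and add it back to the degraded input.
        return torch.clamp(x + self.body(x), 0.0, 1.0)


def task_driven_step(enhancer, task_model, task_loss_fn, degraded, clean, targets,
                     optimizer, lambda_task=1.0):
    """One joint training step: restoration loss + downstream task loss."""
    optimizer.zero_grad()
    enhanced = enhancer(degraded)
    # Restoration term keeps the enhanced image close to the clean reference.
    loss_restore = nn.functional.l1_loss(enhanced, clean)
    # Task term back-propagates the perception objective (e.g. a detection loss)
    # through the enhancer -- this coupling is the "task-driven" part.
    loss_task = task_loss_fn(task_model(enhanced), targets)
    loss = loss_restore + lambda_task * loss_task
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, both the enhancer and the task model receive gradients from the task loss, so the enhancement output is shaped by what the perception model needs rather than by pixel fidelity alone; how the paper balances or schedules the two terms is not specified here.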