Reinforcement learning
Energy management
Fuel efficiency
Driving cycle
Computer science
Automotive engineering
Energy consumption
Hydrogen fuel
Simulation
Engineering
Authors
Xiaolin Tang, Haitao Zhou, Feng Wang, Weida Wang, Xianke Lin
Source
Journal: Energy
[Elsevier]
Date: 2022-01-01
Volume: 238, 121593
Identifier
DOI:10.1016/j.energy.2021.121593
Abstract
Deep reinforcement learning-based energy management strategies play an essential role in improving fuel economy and extending fuel cell lifetime for fuel cell hybrid electric vehicles. In this work, the traditional Deep Q-Network (DQN) is compared with a DQN using prioritized experience replay (PER-DQN). The PER-DQN is then used to design an energy management strategy (EMS) that minimizes hydrogen consumption, and its performance is compared with dynamic programming (DP). Furthermore, fuel cell system degradation is incorporated into the objective function, and a balance between fuel economy and fuel cell system degradation is achieved by adjusting the degradation weight and the hydrogen consumption weight. Finally, a combined driving cycle is selected to further verify the effectiveness of the proposed strategy in unfamiliar driving environments and untrained situations. Training results under UDDS show that the fuel economy of the EMS decreases by 0.53% when fuel cell system degradation is considered, reaching 88.73% of the DP-based EMS under UDDS, while the degradation of the fuel cell system is effectively suppressed. At the same time, computational efficiency improves by more than 70% compared to the DP-based strategy.
• A deep reinforcement learning energy management framework is developed.
• An improved Deep Q-Network algorithm is used for energy management.
• A PER-DQN-based energy management strategy that considers fuel cell degradation is proposed.
• A combined driving cycle is selected to further verify the effectiveness of the proposed strategy.
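The two ingredients the abstract describes can be sketched compactly: prioritized experience replay samples transitions in proportion to their TD error magnitude rather than uniformly, and the objective weighs hydrogen consumption against fuel cell degradation. The sketch below is a minimal illustration, not the paper's implementation; the class name, the weight values `w_h2` and `w_deg`, and the `alpha` exponent are illustrative assumptions.

```python
import random


class PrioritizedReplayBuffer:
    """Minimal sketch of prioritized experience replay: transitions are
    stored with priority |TD error|**alpha and sampled in proportion to
    that priority, so surprising transitions are replayed more often."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha        # 0 = uniform replay, 1 = fully prioritized
        self.data = []            # stored transitions
        self.priorities = []      # one priority per transition

    def add(self, transition, td_error):
        priority = (abs(td_error) + 1e-6) ** self.alpha  # epsilon keeps p > 0
        if len(self.data) >= self.capacity:              # evict oldest when full
            self.data.pop(0)
            self.priorities.pop(0)
        self.data.append(transition)
        self.priorities.append(priority)

    def sample(self, batch_size):
        # Proportional sampling with replacement, weighted by priority.
        return random.choices(self.data, weights=self.priorities, k=batch_size)


def reward(h2_consumption_g, fc_degradation, w_h2=1.0, w_deg=200.0):
    """Hypothetical step reward: penalize hydrogen use and fuel cell
    degradation; the paper balances the two by tuning their weights.
    The weight values here are illustrative, not from the paper."""
    return -(w_h2 * h2_consumption_g + w_deg * fc_degradation)
```

Raising `w_deg` relative to `w_h2` pushes the learned policy toward gentler fuel cell operation at some cost in hydrogen economy, which is the trade-off the abstract reports (a 0.53% fuel economy decrease when degradation is considered).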