Reinforcement learning
Deep learning
Artificial neural network
Driving cycle
Electric vehicle
Computer science
Artificial intelligence
Energy management
Engineering
Power (physics)
Energy (signal processing)
Mathematics
Quantum mechanics
Statistics
Physics
Authors
Basel Jouda, Ahmad Jobran Al-Mahasneh, Mohammed Abu Mallouh
Identifier
DOI:10.1016/j.enconman.2023.117973
Abstract
Fuel cell hybrid electric vehicles offer a promising solution for sustainable and environmentally friendly transportation, but they necessitate efficient energy management strategies (EMSs) to optimize their fuel economy. However, designing an optimal learning-based EMS becomes challenging in the presence of limited training data. This paper presents a deep stochastic reinforcement learning based approach to address this issue of epistemic uncertainty in a midsize fuel cell hybrid electric vehicle. The approach introduces a deep REINFORCE framework with a deep neural network baseline and entropy regularization to develop a stochastic policy for EMS. The performance of the proposed approach is benchmarked against three EMSs: i) a state-of-the-art deep deterministic reinforcement learning technique called Double Deep Q-Network (DDQN), ii) a Power Follower Controller (PFC), and iii) a Fuzzy Logic Controller (FLC). Using the New York City cycle as a validation drive cycle, the deep REINFORCE approach improves fuel economy by 7.68%, 13.53%, and 10% compared to DDQN, PFC, and FLC, respectively. Under another validation cycle, the Amman cycle, the deep REINFORCE approach improves fuel economy by 5.31%, 9.78%, and 9.93% compared to DDQN, PFC, and FLC, respectively. Moreover, the training results show that the proposed algorithm reduces training time by 38% compared to the DDQN approach. The proposed deep REINFORCE-based EMS shows superiority not only in terms of fuel economy, but also in terms of dealing with epistemic uncertainty.
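The abstract's core technique, REINFORCE with a learned baseline and entropy regularization, can be sketched on a toy problem. The paper applies it to a fuel cell hybrid EV with a deep neural network baseline; the minimal sketch below instead uses a two-armed bandit, a softmax policy over logits, and a scalar running-mean baseline, so every name and number here is illustrative rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def pull(a):
    # Toy environment (not from the paper): arm 1 pays mean 1.0, arm 0 mean 0.2.
    return rng.normal(1.0 if a == 1 else 0.2, 0.1)

theta = np.zeros(2)          # policy logits (stands in for the policy network)
baseline = 0.0               # scalar baseline (stands in for the baseline network)
alpha, beta_ent, lr_b = 0.1, 0.01, 0.05

for _ in range(2000):
    p = softmax(theta)
    a = rng.choice(2, p=p)
    G = pull(a)              # return of this one-step episode

    # Score function: grad of log pi(a) w.r.t. logits for a softmax policy.
    grad_logp = -p
    grad_logp[a] += 1.0

    # Entropy H = -sum p log p; its gradient w.r.t. logits is -p*(log p + H).
    H = -np.sum(p * np.log(p))
    grad_ent = -p * (np.log(p) + H)

    # REINFORCE update: advantage (G - baseline) times score, plus entropy bonus.
    theta += alpha * ((G - baseline) * grad_logp + beta_ent * grad_ent)
    baseline += lr_b * (G - baseline)   # baseline tracks the mean return
```

The baseline reduces the variance of the policy gradient, and the entropy bonus keeps the policy stochastic during training, which is the property the paper leverages against epistemic uncertainty from limited data.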