Reinforcement learning
Distillation
Computer science
Scheduling (production processes)
Artificial intelligence
Algorithm
Mathematical optimization
Mathematics
Chemistry
Organic chemistry
Authors
Haipeng Xiao, Lijun Fu, Chengya Shang, Xianqiang Bao, Xinghua Xu
Source
Journal: IEEE Transactions on Transportation Electrification
Date: 2024-01-01
Volume/Issue: 1-1
Identifier
DOI: 10.1109/tte.2024.3398991
Abstract
Ship optimization scheduling using deep reinforcement learning (DRL) has been extensively researched and implemented. Notably, the deep Q-learning algorithm (DQN) has been successfully deployed in the optimization scheduling domain. However, there is currently almost no research on compressing and accelerating DQN-based All-Electric Ship (AES) energy scheduling models. This paper proposes a DQN knowledge distillation (DQN-KD) compression algorithm that incorporates a teacher replay memory pool (T-rpm) learning mechanism to address the compression problem of the DQN-based optimization scheduling model of an AES. The DQN-KD algorithm effectively transfers the knowledge of the teacher agent to the student agent, and the T-rpm learning mechanism further improves the training efficiency and performance of the student agent. Experimental results on the AES system demonstrate that the proposed compression method is highly effective. Compared with the teacher model, the student model's parameter count, FLOPs, and memory footprint are reduced by 87.7%, 92.61%, and 88.3%, respectively. Despite these substantial reductions, the student agent incurs only a marginal 0.33% increase in economic consumption relative to the teacher agent. When the student agent's parameters are further reduced by 47.5%, FLOPs by 50.4%, and memory by 47.3%, the increase in economic consumption is still only 0.59% compared with the teacher agent. Importantly, even with these notable reductions, the compressed agent maintains strong generalization performance.
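The abstract does not specify the exact distillation loss or how the T-rpm is sampled, so the following is only a minimal sketch of one plausible DQN-KD update step in PyTorch: a Hinton-style soft-label distillation loss combined with a standard TD loss, computed on transitions assumed to come from the teacher's replay memory pool. All names here (QNet, dqn_kd_step, and the hyperparameters tau and alpha) are illustrative and are not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QNet(nn.Module):
    """Simple MLP Q-network; the student uses a much smaller hidden layer."""
    def __init__(self, obs_dim, n_actions, hidden):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, x):
        return self.net(x)

def dqn_kd_step(student, teacher, batch, optimizer,
                gamma=0.99, tau=2.0, alpha=0.5):
    """One hypothetical DQN-KD update on a batch assumed to be sampled
    from the teacher's replay memory pool (T-rpm)."""
    states, actions, rewards, next_states, dones = batch

    q_student = student(states)                        # [B, n_actions]
    with torch.no_grad():
        q_teacher = teacher(states)                    # frozen teacher
        q_next = student(next_states).max(dim=1).values
        td_target = rewards + gamma * (1.0 - dones) * q_next

    # Standard TD loss on the actions stored in the replayed transitions.
    q_taken = q_student.gather(1, actions.unsqueeze(1)).squeeze(1)
    td_loss = F.smooth_l1_loss(q_taken, td_target)

    # Soft-label distillation loss: KL divergence between the teacher's
    # and student's temperature-softened Q-value distributions.
    kd_loss = F.kl_div(
        F.log_softmax(q_student / tau, dim=1),
        F.softmax(q_teacher / tau, dim=1),
        reduction="batchmean",
    ) * (tau ** 2)

    loss = alpha * td_loss + (1.0 - alpha) * kd_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Illustrative usage with dummy shapes (obs_dim=8, 4 actions).
teacher = QNet(8, 4, hidden=256)   # stands in for the large pre-trained agent
student = QNet(8, 4, hidden=32)    # compressed student
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
batch = (torch.randn(64, 8), torch.randint(0, 4, (64,)),
         torch.randn(64), torch.randn(64, 8), torch.zeros(64))
dqn_kd_step(student, teacher, batch, opt)
```

Here alpha trades off imitating the teacher against optimizing the task's own TD objective, and tau softens the Q-value distributions so the student also learns the teacher's relative action preferences; the paper's actual loss formulation, network sizes, and T-rpm sampling rule may differ.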