Authors
Jingda Wu, Zhiyu Huang, Zhongxu Hu, Chen Lv
Source
Journal: Engineering (Elsevier)
Date: 2022-07-20
Volume/pages: 21: 75-91
Citations: 54
Identifier
DOI: 10.1016/j.eng.2022.05.017
Abstract
Due to its limited intelligence and abilities, machine learning is currently unable to handle many real-world situations and thus cannot completely replace humans in practical applications. Because humans exhibit robustness and adaptability in complex scenarios, it is crucial to introduce humans into the training loop of artificial intelligence (AI), leveraging human intelligence to further advance machine learning algorithms. In this study, a real-time human-guidance-based deep reinforcement learning (Hug-DRL) method is developed for policy training in an end-to-end autonomous driving case. With a newly designed mechanism for control transfer between humans and automation, humans are able to intervene and correct the agent's unreasonable actions in real time when necessary during model training. Based on this human-in-the-loop guidance mechanism, an improved actor-critic architecture with modified policy and value networks is developed. The fast convergence of the proposed Hug-DRL allows real-time human guidance actions to be fused into the agent's training loop, further improving the efficiency and performance of DRL. The developed method is validated by human-in-the-loop experiments with 40 subjects and compared with other state-of-the-art learning approaches. The results suggest that the proposed method can effectively enhance the training efficiency and performance of the DRL algorithm under human guidance without imposing specific requirements on participants' expertise or experience.
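The control-transfer mechanism the abstract describes can be sketched as a simple rule inside the training loop: whenever the human supplies an action, it overrides the agent's action and the resulting transition is flagged as human-guided so the learner can treat it differently. The snippet below is a minimal illustrative sketch only, not the paper's implementation; the function name, the scalar steering-style action, and the intervention threshold are all hypothetical.

```python
import random

def control_transfer(agent_action, human_action=None):
    """Hypothetical control-transfer rule: if the human intervenes,
    execute the human action and flag the transition as guided;
    otherwise execute the agent's own action."""
    if human_action is not None:
        return human_action, True   # human-guided transition
    return agent_action, False      # autonomous transition

# Toy rollout: the "human" intervenes whenever the agent's action
# looks unreasonable (here, a made-up magnitude threshold of 0.8).
random.seed(0)
replay_buffer = []
for step in range(5):
    agent_action = random.uniform(-1.0, 1.0)
    human_action = 0.0 if abs(agent_action) > 0.8 else None
    executed, guided = control_transfer(agent_action, human_action)
    # Guided transitions could later be up-weighted in the policy update.
    replay_buffer.append((executed, guided))
```

In a full Hug-DRL-style system, the flagged transitions would feed the modified policy and value networks so that real-time human corrections shape the learned policy rather than being discarded.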