Car-Following Behavior Modeling With Maximum Entropy Deep Inverse Reinforcement Learning

Keywords: computer science, reinforcement learning, artificial intelligence, machine learning, inference, trajectory, artificial neural networks
Authors
Jiangfeng Nan, Weiwen Deng, Ruzheng Zhang, Rui Zhao, Ying Wang, Juan Ding
Source
Journal: IEEE Transactions on Intelligent Vehicles [Institute of Electrical and Electronics Engineers]
Volume/Issue: 9 (2): 3998-4010  Cited by: 2
Identifier
DOI: 10.1109/tiv.2023.3335218
Abstract

Modeling driving behavior plays a pivotal role in advancing the development of human-like autonomous driving. To this end, this paper proposes a car-following behavior modeling method based on sample-based deep inverse reinforcement learning (DIRL). Traditional IRL, which represents the reward function as a linear combination of hand-crafted features, suffers from low modeling accuracy due to the difficulty of feature extraction and the limited fitting capacity of linear functions. DIRL therefore uses deep neural networks to represent the reward function. However, because DIRL requires reinforcement learning to determine the optimal policy for its learned reward function, both training and inference are computationally expensive and inefficient. To address this issue, this paper proposes sample-based DIRL. By discretizing the solution space, sample-based DIRL reduces the integral computation of the partition function to a summation, improving computational efficiency. Specifically, sample-based DIRL is a three-stage framework: sampling candidate trajectories, evaluating the candidates with the learned reward function, and selecting the trajectory with the highest reward. To evaluate DIRL at the level of both driving behavior and the reward function, an MPC-based virtual driver with an explicit reward function is used to collect driving data for training and to assess the convergence of the learned reward function. The experimental results confirm that the proposed method accurately models car-following behavior and recovers the driver's reward function from driving data.
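The three-stage framework described in the abstract (sample candidate trajectories, score them with a learned reward, select the maximizer, with the partition-function integral replaced by a summation over samples) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the tiny reward network, the mean-feature trajectory encoding, the sampling ranges, and the stationary-leader kinematics are all simplifying placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def reward_net(features, weights):
    # Hypothetical stand-in for the paper's deep reward network:
    # a tiny two-layer MLP mapping trajectory features to a scalar reward.
    h = np.tanh(features @ weights["W1"] + weights["b1"])
    return float(h @ weights["W2"] + weights["b2"])

def sample_candidate_trajectories(v0, gap0, n_samples=50, horizon=10, dt=0.1):
    """Stage 1: discretize the solution space by sampling candidate
    acceleration profiles and rolling out the follower's kinematics."""
    trajectories = []
    for _ in range(n_samples):
        accels = rng.uniform(-3.0, 2.0, size=horizon)  # candidate controls
        v, gap, traj = v0, gap0, []
        for a in accels:
            v = max(v + a * dt, 0.0)       # follower speed, no reversing
            gap = gap - v * dt             # assumes a stationary leader for brevity
            traj.append((gap, v, a))
        trajectories.append(np.asarray(traj))
    return trajectories

def select_trajectory(trajectories, weights):
    """Stages 2-3: score every candidate with the learned reward and pick
    the maximizer. The softmax normalization below is the discretized
    MaxEnt trajectory distribution: a summation over sampled candidates
    replaces the intractable partition-function integral."""
    feats = np.array([t.mean(axis=0) for t in trajectories])  # crude features
    rewards = np.array([reward_net(f, weights) for f in feats])
    probs = np.exp(rewards - rewards.max())
    probs /= probs.sum()
    return trajectories[int(rewards.argmax())], probs
```

A usage example with randomly initialized (untrained) reward weights, which in the paper would instead be fit to driving data by maximizing the likelihood of demonstrated trajectories under the softmax distribution:

```python
weights = {"W1": rng.normal(size=(3, 8)), "b1": np.zeros(8),
           "W2": rng.normal(size=8), "b2": 0.0}
candidates = sample_candidate_trajectories(v0=10.0, gap0=20.0)
best, probs = select_trajectory(candidates, weights)
```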
