Authors
Yinuo Wang, Likun Wang, Yuxuan Jiang, Wenjun Zou, Tong Liu, Xujie Song, Wenxuan Wang, Liming Xiao, Jiang Wu, Jingliang Duan, Shengbo Eben Li
Source
Journal: Cornell University - arXiv
Date: 2024-05-23
Identifier
DOI: 10.48550/arxiv.2405.15177
Abstract
Reinforcement learning (RL) has proven highly effective in addressing complex decision-making and control tasks. However, in most traditional RL algorithms, the policy is parameterized as a diagonal Gaussian distribution with learned mean and variance, which constrains its ability to represent complex policies. To address this problem, we propose an online RL algorithm termed diffusion actor-critic with entropy regulator (DACER). This algorithm conceptualizes the reverse process of the diffusion model as a novel policy function and leverages the diffusion model's capability to fit multimodal distributions, thereby enhancing the representational capacity of the policy. Because the distribution of the diffusion policy lacks an analytical expression, its entropy cannot be determined in closed form. To mitigate this, we propose a method for estimating the entropy of the diffusion policy using a Gaussian mixture model. Building on the estimated entropy, we learn a parameter $\alpha$ that modulates the balance between exploration and exploitation; $\alpha$ adaptively regulates the variance of the noise added to the action output by the diffusion model. Experiments on MuJoCo benchmarks and a multimodal task demonstrate that DACER achieves state-of-the-art (SOTA) performance in most MuJoCo control tasks while exhibiting the stronger representational capacity of the diffusion policy.
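To make the two mechanisms named in the abstract concrete, below is a minimal Python sketch (not the authors' released code) of (1) estimating the entropy of a non-analytic diffusion policy by fitting a Gaussian mixture model to sampled actions, and (2) using that estimate to adapt a scalar $\alpha$ that scales the exploration noise added to the diffusion policy's action output. Function names, hyperparameters, and the specific $\alpha$ update rule are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of GMM-based entropy estimation and an alpha-scaled
# exploration-noise regulator, under the assumptions stated above.
import numpy as np
from sklearn.mixture import GaussianMixture


def estimate_policy_entropy(actions: np.ndarray, n_components: int = 5,
                            n_mc_samples: int = 4096) -> float:
    """Monte-Carlo entropy estimate of a GMM fitted to sampled actions.

    `actions` has shape (n_samples, action_dim): actions drawn from the
    diffusion policy at a batch of states. The GMM entropy has no closed
    form, so we estimate H = -E[log p(x)] from samples of the fitted model.
    """
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    gmm.fit(actions)
    samples, _ = gmm.sample(n_mc_samples)
    return float(-gmm.score_samples(samples).mean())


def update_alpha(alpha: float, entropy_estimate: float,
                 target_entropy: float, lr: float = 3e-4) -> float:
    """Multiplicative update (an assumption, SAC-style): raise alpha when
    the estimated entropy falls below the target (more exploration noise),
    lower it when entropy exceeds the target."""
    return float(alpha * np.exp(lr * (target_entropy - entropy_estimate)))


def act_with_entropy_regulator(diffusion_action: np.ndarray,
                               alpha: float,
                               rng: np.random.Generator) -> np.ndarray:
    """Add alpha-scaled Gaussian noise to the action produced by the
    diffusion model's reverse process, then clip to the action bounds."""
    noise = rng.standard_normal(diffusion_action.shape)
    return np.clip(diffusion_action + alpha * noise, -1.0, 1.0)
```

In this reading, the GMM serves only as a density surrogate for entropy estimation; the policy itself remains the diffusion model's reverse process, and $\alpha$ plays the role of the "entropy regulator" by scaling the post-hoc action noise rather than a per-dimension learned variance.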