Reinforcement learning
Maximum entropy principle
Computer science
Independent and identically distributed random variables
Entropy (arrow of time)
Artificial intelligence
Task (project management)
Machine learning
Mathematics
Random variable
Engineering
Statistics
Physics
Systems engineering
Quantum mechanics
Authors
Thomas A. Berrueta,Allison Pinosky,Todd D. Murphey
Source
Journal: Cornell University - arXiv
Date: 2023-01-01
Identifier
DOI: 10.48550/arXiv.2309.15293
Abstract
The assumption that data are independent and identically distributed underpins all machine learning. When data are collected sequentially from agent experiences this assumption does not generally hold, as in reinforcement learning. Here, we derive a method that overcomes these limitations by exploiting the statistical mechanics of ergodic processes, which we term maximum diffusion reinforcement learning. By decorrelating agent experiences, our approach provably enables single-shot learning in continuous deployments over the course of individual task attempts. Moreover, we prove our approach generalizes well-known maximum entropy techniques, and robustly exceeds state-of-the-art performance across popular benchmarks. Our results at the nexus of physics, learning, and control pave the way towards more transparent and reliable decision-making in reinforcement learning agents, such as locomoting robots and self-driving cars.
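The abstract's starting point is that sequentially collected agent experiences violate the i.i.d. assumption. A minimal sketch of that failure mode (not the paper's maximum diffusion method, just an illustration): the lag-1 autocorrelation of genuinely i.i.d. draws is near zero, while a random walk standing in for "agent experience" is strongly correlated from step to step.

```python
import random

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation of a sequence."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    cov = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1)) / n
    return cov / var

random.seed(0)

# i.i.d. draws: each sample is independent of the last.
iid = [random.gauss(0, 1) for _ in range(5000)]

# Sequential "experience": a random walk, where each state depends
# on the previous one, so consecutive samples are highly correlated.
walk = [0.0]
for _ in range(4999):
    walk.append(walk[-1] + random.gauss(0, 1))

print(f"i.i.d. lag-1 autocorrelation:       {lag1_autocorr(iid):.3f}")
print(f"random-walk lag-1 autocorrelation:  {lag1_autocorr(walk):.3f}")
```

The i.i.d. sequence's autocorrelation is close to 0 while the walk's is close to 1, which is the sense in which the paper's "decorrelating agent experiences" restores the conditions that standard learning guarantees assume.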