Keywords
Regret
Scaling
Mathematics
Estimator
Lipschitz continuity
Limit (mathematics)
Stochastic differential equation
Applied mathematics
Computer science
Mathematical optimization
Statistical physics
Statistics
Mathematical analysis
Physics
Geometry
Source
Journal: Management Science
Publisher: Institute for Operations Research and the Management Sciences
Date: 2023-12-06
Citations: 1
Identifier
DOI: 10.1287/mnsc.2023.4964
Abstract
We use the lens of weak signal asymptotics to study a class of sequentially randomized experiments, including those that arise in solving multiarmed bandit problems. In an experiment with n time steps, we let the mean reward gaps between actions scale to the order 1/√n to preserve the difficulty of the learning task as n grows. In this regime, we show that the sample paths of a class of sequentially randomized experiments—adapted to this scaling regime and with arm selection probabilities that vary continuously with state—converge weakly to a diffusion limit, given as the solution to a stochastic differential equation. The diffusion limit enables us to derive a refined, instance-specific characterization of stochastic dynamics and to obtain several insights on the regret and belief evolution of a number of sequential experiments including Thompson sampling (but not upper-confidence bound, which does not satisfy our continuity assumption). We show that all sequential experiments whose randomization probabilities have a Lipschitz-continuous dependence on the observed data suffer from suboptimal regret performance when the reward gaps are relatively large. Conversely, we find that a version of Thompson sampling with an asymptotically uninformative prior variance achieves near-optimal instance-specific regret scaling, including with large reward gaps, but these good regret properties come at the cost of highly unstable posterior beliefs. This paper was accepted by Baris Ata, stochastic models and simulation. Supplemental Material: The data and online appendix are available at https://doi.org/10.1287/mnsc.2023.4964.
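To make the weak-signal scaling concrete, the following is a minimal Python sketch, not taken from the paper: it simulates two-armed Thompson sampling with Gaussian rewards where the mean reward gap shrinks as 1/√n, so the learning task stays comparably hard as the horizon n grows. The arm count, noise level, prior variance, and the constant in the gap are all illustrative assumptions.

import numpy as np

def thompson_sampling_weak_signal(n, gap_const=1.0, sigma=1.0, prior_var=1.0, seed=0):
    """Two-armed Thompson sampling with a reward gap of order 1/sqrt(n).

    Gaussian rewards with known noise sigma and independent Gaussian
    posteriors per arm. All constants here are illustrative assumptions,
    not values from the paper.
    """
    rng = np.random.default_rng(seed)
    delta = gap_const / np.sqrt(n)      # weak-signal scaling of the gap
    means = np.array([0.0, delta])      # arm 1 is better by delta

    # Gaussian posterior state per arm: N(mu, var), starting at the prior.
    mu = np.zeros(2)
    var = np.full(2, prior_var)
    regret = 0.0

    for _ in range(n):
        # Sample a mean from each posterior and play the argmax; the
        # resulting arm-selection probability varies continuously with
        # the posterior state, as the paper's continuity condition requires.
        a = int(np.argmax(rng.normal(mu, np.sqrt(var))))
        reward = rng.normal(means[a], sigma)
        regret += means.max() - means[a]

        # Conjugate Gaussian posterior update for the played arm.
        precision = 1.0 / var[a] + 1.0 / sigma**2
        mu[a] = (mu[a] / var[a] + reward / sigma**2) / precision
        var[a] = 1.0 / precision

    return regret

for n in (1_000, 10_000, 100_000):
    avg = np.mean([thompson_sampling_weak_signal(n, seed=s) for s in range(20)])
    print(f"n={n:>7}: mean regret ~ {avg:.2f}")

Because the gap is of order 1/√n, the per-step regret of a suboptimal pull is O(1/√n) and cumulative regret over n steps is at most O(√n), which is the diffusion-scale regime the abstract describes; the sketch only illustrates that setup and makes no claim about the paper's specific diffusion limit.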