Fading
Autoregressive model
Computer science
Channel (broadcasting)
Scheduling (production processes)
Markov process
Markov chain
Markov model
Real-time computing
Mathematical optimization
Computer network
Econometrics
Statistics
Mathematics
Machine learning
Authors
Manali Dutta,Rahul Singh
Identifier
DOI: 10.1109/CDC49753.2023.10384144
Abstract
We consider the problem of optimally scheduling transmissions for remote estimation of a discrete-time autoregressive Markov process driven by white Gaussian noise. A sensor observes this process and, in each slot, either encodes the current state into a data packet and attempts to transmit it to the estimator over an unreliable wireless channel modeled as a Gilbert-Elliott channel [1]–[3], or sends no update. Each transmission attempt consumes $\lambda$ units of transmission power, and the remote estimator is assumed to be linear. The channel state is revealed only via the feedback (ACK/NACK) of a transmission; hence the channel state is not revealed when no transmission occurs. The goal of the scheduler is to minimize the expected value of an infinite-horizon cumulative discounted cost, in which the instantaneous cost is composed of two quantities: (i) the squared estimation error, and (ii) the transmission power. We pose this problem as a partially observable Markov decision process (POMDP), in which the scheduler maintains a belief about the current state of the channel and makes decisions on the basis of the current value of the error $e(t)$ (defined in (6)) and the belief state. To aid the analysis, we introduce an easier-to-analyze "folded POMDP." We then analyze this folded POMDP and show that there is an optimal scheduling policy with a threshold structure, i.e., for each value of the error $e$ there is a threshold $b^{\ast}(e)$ such that, when the error equals $e$, the policy transmits only when the current belief state exceeds $b^{\ast}(e)$.
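The scheduling loop described in the abstract can be sketched in code: the scheduler tracks a belief (the probability that the Gilbert-Elliott channel is in its good state), propagates it through the channel's Markov transition matrix when it stays silent, resets it from ACK/NACK feedback when it transmits, and transmits only when the belief exceeds an error-dependent threshold $b^{\ast}(e)$. This is a minimal illustration, not the paper's algorithm: the transition probabilities `P_GG`/`P_BG`, the AR(1) coefficient `A`, the cost weight `LAM`, and the threshold function passed as `b_star` are all hypothetical placeholders, and the error recursion on success/failure is a simplified stand-in for the paper's definition of $e(t)$ in (6).

```python
import random

# Hypothetical parameters (illustrative only, not from the paper):
P_GG = 0.9   # P(good -> good) of the Gilbert-Elliott channel
P_BG = 0.4   # P(bad  -> good)
A = 1.2      # AR(1) coefficient of the source process
LAM = 2.0    # transmission power cost per attempt (lambda)

def predict_belief(b):
    """Belief update when no transmission occurs: the channel state
    stays hidden, so the belief propagates through the transition matrix."""
    return b * P_GG + (1.0 - b) * P_BG

def threshold_policy(e, b, b_star):
    """Threshold structure: transmit iff the belief that the channel
    is good exceeds the error-dependent threshold b_star(|e|)."""
    return b > b_star(abs(e))

def step(e, b, channel_good, b_star, rng):
    """One slot of the scheduling loop (sketch): returns the next
    error, the next belief, and the instantaneous cost incurred."""
    cost = e * e                      # squared estimation error
    if threshold_policy(e, b, b_star):
        cost += LAM                   # pay the transmission power
        if channel_good:
            e_next = 0.0              # success: estimator synchronizes
            b = P_GG                  # ACK reveals channel was good
        else:
            e_next = A * e            # packet lost
            b = P_BG                  # NACK reveals channel was bad
    else:
        e_next = A * e                # no update sent
        b = predict_belief(b)         # channel state stays hidden
    e_next += rng.gauss(0.0, 1.0)     # white Gaussian process noise
    return e_next, b, cost
```

With a constant threshold (e.g. `b_star = lambda e: 0.5`), the policy transmits whenever the belief exceeds 0.5; the paper's result is that the optimal threshold may vary with the error magnitude.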