We consider the problem of optimally scheduling transmissions for remote estimation of a discrete-time autoregressive Markov process driven by white Gaussian noise. A sensor observes this process and then decides either to encode the current state of the process into a data packet and attempt to transmit it to the estimator over an unreliable wireless channel modeled as a Gilbert-Elliott channel [1]–[3], or to send no update. Each transmission attempt consumes $\lambda$ units of transmission power, and the remote estimator is assumed to be linear. The channel state is revealed only via the feedback (ACK/NACK) of a transmission; hence the channel state remains hidden when no transmission occurs. The goal of the scheduler is to minimize the expected value of an infinite-horizon cumulative discounted cost, in which the instantaneous cost comprises two quantities: (i) the squared estimation error and (ii) the transmission power. We pose this problem as a partially observable Markov decision process (POMDP), in which the scheduler maintains a belief about the current state of the channel and makes decisions on the basis of the current value of the error $e(t)$ (defined in (6)) and the belief state. To aid the analysis, we introduce an easier-to-analyze "folded POMDP." We then analyze this folded POMDP and show that there is an optimal scheduling policy with threshold structure, i.e., for each value of the error $e$, there is a threshold $b^{\ast}(e)$ such that when the error equals $e$, the policy transmits only when the current belief state exceeds $b^{\ast}(e)$.
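The belief dynamics and threshold policy described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the transition probabilities `P_GG`, `P_BG` and the threshold function passed to `schedule` are hypothetical placeholders, and the belief is taken to be the probability that the two-state Gilbert-Elliott channel is in its good state.

```python
# Illustrative sketch of the scheduler's belief update and threshold rule.
# P_GG and P_BG are assumed (hypothetical) channel transition probabilities.
P_GG = 0.9   # P(channel good at t+1 | good at t)
P_BG = 0.3   # P(channel good at t+1 | bad at t)

def belief_update(b, transmitted, ack=None):
    """One-step update of b = P(channel is good).

    If a transmission occurred, the ACK/NACK feedback reveals the channel
    state at transmission time, so the belief is reset by propagating the
    revealed state one step; otherwise the belief is propagated through
    the channel's Markov chain without new information.
    """
    if transmitted:
        # ack=True: channel was good; ack=False: channel was bad.
        return P_GG if ack else P_BG
    return b * P_GG + (1.0 - b) * P_BG

def schedule(e, b, b_star):
    """Threshold policy: transmit iff the belief exceeds b_star(e)."""
    return b > b_star(e)
```

Under such a policy, a large error $e$ would typically correspond to a lower threshold $b^{\ast}(e)$, so the scheduler transmits even when it is less confident the channel is good.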