Keywords
Estimator, Mathematics, Iterated function, Efficient estimator, Invariant estimator, Minimum-variance unbiased estimator, Delta method, Applied mathematics, Consistent estimator, Asymptotic distribution, Minimax estimation, Stein's unbiased risk estimate, Combinatorics, Statistics, Mathematical analysis
Authors
Federico A. Bugni, Jackson Bunting
Identifier
DOI:10.1093/restud/rdaa032
Abstract
We study the first-order asymptotic properties of a class of estimators of the structural parameters in dynamic discrete choice games. We consider $K$-stage policy iteration (PI) estimators, where $K$ denotes the number of PIs employed in the estimation. This class nests several estimators proposed in the literature. By considering a “pseudo likelihood” criterion function, our estimator becomes the $K$-pseudo maximum likelihood (PML) estimator in Aguirregabiria and Mira (2002, 2007). By considering a “minimum distance” criterion function, it defines a new $K$-minimum distance (MD) estimator, which is an iterative version of the estimators in Pesendorfer and Schmidt-Dengler (2008) and Pakes et al. (2007). First, we establish that the $K$-PML estimator is consistent and asymptotically normal for any $K \in \mathbb{N}$. This complements findings in Aguirregabiria and Mira (2007), who focus on $K=1$ and $K$ large enough to induce convergence of the estimator. Furthermore, we show under certain conditions that the asymptotic variance of the $K$-PML estimator can exhibit arbitrary patterns as a function of $K$. Second, we establish that the $K$-MD estimator is consistent and asymptotically normal for any $K \in \mathbb{N}$. For a specific weight matrix, the $K$-MD estimator has the same asymptotic distribution as the $K$-PML estimator. Our main result provides an optimal sequence of weight matrices for the $K$-MD estimator and shows that the optimally weighted $K$-MD estimator has an asymptotic distribution that is invariant to $K$. The invariance result is especially unexpected given the findings in Aguirregabiria and Mira (2007) for $K$-PML estimators. Our main result implies two new corollaries about the optimal $1$-MD estimator (derived by Pesendorfer and Schmidt-Dengler (2008)). First, the optimal $1$-MD estimator is efficient in the class of $K$-MD estimators for all $K \in \mathbb{N}$. In other words, additional PIs do not provide first-order efficiency gains relative to the optimal $1$-MD estimator. Second, the optimal $1$-MD estimator is at least as efficient as any $K$-PML estimator for all $K \in \mathbb{N}$. Finally, the Appendix provides appropriate conditions under which the optimal $1$-MD estimator is efficient among regular estimators.
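To make the $K$-stage idea concrete, the following is a minimal sketch of a $K$-PML estimator in a toy single-agent dynamic logit model. The paper's setting is dynamic games; the model primitives below (two states, two actions, the payoff $u(s,a;\theta)=a(\theta-s)$, the transition matrices, and the grid-search maximizer) are illustrative assumptions, not the authors' specification. The mapping `psi` is one policy-iteration step: given beliefs $P(a\,|\,s)$, it returns logit best-response probabilities; each PML stage maximizes the pseudo-likelihood built from `psi` and then updates the beliefs.

```python
import numpy as np

# Toy dynamic logit model (illustrative assumptions, not the paper's spec):
# states s in {0,1}, actions a in {0,1}, discount BETA, T1EV errors.
BETA, GAMMA = 0.9, 0.5772156649  # discount factor, Euler's constant
THETA_TRUE = 0.5
# F[a, s, s']: action 0 pushes the state toward 0, action 1 toward 1.
F = np.array([[[0.9, 0.1], [0.9, 0.1]],
              [[0.1, 0.9], [0.1, 0.9]]])
Ft = F.transpose(1, 0, 2)  # Ft[s, a, s']

def utility(theta):
    # u(s, a; theta) = a * (theta - s), shape (state, action).
    return np.array([[0.0, theta], [0.0, theta - 1.0]])

def psi(P, theta):
    """One policy-iteration step: best-response probs given beliefs P(a|s)."""
    u = utility(theta)
    # Value of following policy P forever (logit expected-error correction).
    u_bar = (P * (u + GAMMA - np.log(P))).sum(axis=1)        # (S,)
    F_P = np.einsum('sa,sat->st', P, Ft)                     # transition under P
    V = np.linalg.solve(np.eye(2) - BETA * F_P, u_bar)
    v = u + BETA * np.einsum('sat,t->sa', Ft, V)             # choice-specific values
    ev = np.exp(v - v.max(axis=1, keepdims=True))            # stable softmax
    return ev / ev.sum(axis=1, keepdims=True)

# Equilibrium choice probabilities at the true parameter: fixed point of psi.
P_eq = np.full((2, 2), 0.5)
for _ in range(200):
    P_eq = psi(P_eq, THETA_TRUE)

# Simulate a cross-section of (state, action) pairs from the equilibrium.
rng = np.random.default_rng(0)
N = 5000
s_obs = rng.integers(0, 2, size=N)
a_obs = (rng.random(N) < P_eq[s_obs, 1]).astype(int)

def loglik(P):
    return np.log(P[s_obs, a_obs]).sum()

# K-PML: start from the frequency estimator of P, then iterate K stages.
counts = np.zeros((2, 2))
np.add.at(counts, (s_obs, a_obs), 1)
P_k = counts / counts.sum(axis=1, keepdims=True)
grid = np.linspace(-1.0, 2.0, 301)
K = 3
for _ in range(K):
    theta_hat = grid[np.argmax([loglik(psi(P_k, th)) for th in grid])]
    P_k = psi(P_k, theta_hat)  # policy-iteration update of the beliefs
```

Each stage reuses the previous stage's estimated choice probabilities, so `theta_hat` after the loop is the $K$-PML estimate; with a consistent first-stage `P_k`, every stage is consistent for `THETA_TRUE`, which mirrors the abstract's claim that consistency and asymptotic normality hold for any $K$.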