Abstract

Mixture modeling is a widely applied data analysis technique used to identify unobserved heterogeneity in a population. Despite the usefulness of mixture models in practice, one unresolved issue in their application is that there is no commonly accepted statistical indicator for deciding on the number of classes in a study population. This article presents the results of a simulation study that examines the performance of likelihood-based tests and the traditionally used information criteria (ICs) for determining the number of classes in mixture modeling. We examine the performance of these tests and indexes for 3 types of mixture models: latent class analysis (LCA), a factor mixture model (FMA), and a growth mixture model (GMM). We evaluate the ability of the tests and indexes to correctly identify the number of classes at 3 different sample sizes (n = 200, 500, 1,000). Whereas the Bayesian Information Criterion performed best among the ICs, the bootstrap likelihood ratio test proved to be a very consistent indicator of the number of classes across all of the models considered.

ACKNOWLEDGMENTS

Karen L. Nylund's research was supported by Grant R01 DA11796 from the National Institute on Drug Abuse (NIDA), and Bengt O. Muthén's research was supported by Grant K02 AA 00230 from the National Institute on Alcohol Abuse and Alcoholism (NIAAA). We thank Mplus for software support, Jacob Cheadle for programming expertise, and Katherine Masyn for helpful comments.

Notes

1. In general, the within-class covariance structure can be freed to allow within-class item covariance.

a. Item probabilities for categorical LCA models are specified by the probability in each cell, and the class means for the continuous LCA are specified by the value in parentheses.

2. The number of random starts for LCA models with categorical outcomes was specified as "starts = 70 7;" in Mplus. The models with continuous outcomes had differing numbers of random starts.

3. It is important to note that when coverage is studied, the random starts option of Mplus should not be used. If it is used, label switching may occur, in that a class for one replication might be represented by another class in another replication, thereby distorting the estimate.

4. The models that presented convergence problems were those that were badly misspecified. For example, for the GMM (true k = 3 class model) with n = 500, the convergence rates for the three-, four-, and five-class models were 100%, 87%, and 68%, respectively.
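As a concrete illustration of Note 2, a minimal Mplus input sketch for an LCA with categorical outcomes and the random starts setting quoted there might look as follows. Only the STARTS specification comes from the note; the data file name, item names, and number of classes are hypothetical, and the OUTPUT options shown are the standard Mplus requests for the likelihood-based tests discussed in the article (TECH11 for the Lo-Mendell-Rubin test, TECH14 for the bootstrap likelihood ratio test).

    TITLE:    LCA with categorical outcomes and random starts;
    DATA:     FILE = lca_example.dat;    ! hypothetical data file
    VARIABLE: NAMES = u1-u8;             ! hypothetical binary items
              CATEGORICAL = u1-u8;
              CLASSES = c(3);            ! hypothetical number of classes
    ANALYSIS: TYPE = MIXTURE;
              STARTS = 70 7;             ! 70 initial-stage starts, 7 final-stage optimizations (Note 2)
    OUTPUT:   TECH11 TECH14;             ! requests the LMR LRT and the bootstrap LRT

Because no MODEL command is given, Mplus estimates its default LCA, with item thresholds varying across the latent classes.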