As an effective dimensionality reduction method, unsupervised feature selection (UFS) focuses on the mutual correlations between high-dimensional data features but often overlooks the intrinsic relationships between instances. Pseudo-labels learned from the data can also be used to guide feature selection in UFS. However, the raw data space may contain noise and outliers, which lowers the accuracy of the learned pseudo-label matrix. We propose a minimum-redundant UFS approach that tackles these problems by jointly combining sparse latent representation learning with dual manifold regularization (SLRDR). Firstly, SLRDR learns a latent representation subspace by exploring the interconnections within the original data. To enhance subspace sparsity, the ℓ2,1-norm is applied to the residual matrix of latent representation learning. Pseudo-label matrix learning is then carried out in this high-quality latent space, yielding effective pseudo-label information that provides more useful guidance for sparse regression. Secondly, based on the manifold learning hypothesis, SLRDR exploits the local structural properties of features in the feature space and explores the association between data and labels, allowing the model to learn richer and more accurate structural information. In addition, the ℓ2,1/2-norm is imposed on the weight matrix to obtain a minimum-redundant solution and select more discriminative features. Finally, an alternating iterative method is used to solve the optimization problem of SLRDR's objective function, and the convergence of the model is analyzed theoretically. Comparative experiments with ten existing algorithms on nine benchmark datasets verify the model's effectiveness.
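As a minimal illustration of the sparsity-inducing norms mentioned above, the sketch below computes a row-sparse ℓ2,1-norm and an ℓ2,1/2 regularizer for a weight matrix, and ranks features by row-wise ℓ2 norm. This is not the paper's implementation; the helper names (`l21_norm`, `l2half_regularizer`, `feature_ranking`) are hypothetical, and the ℓ2,1/2 term follows one common convention (sum of square roots of row norms) that may differ from the paper's exact definition.

```python
import numpy as np

def l21_norm(W):
    # l2,1-norm: sum of the l2 norms of the rows of W,
    # which encourages whole rows (features) to shrink to zero
    return float(np.sum(np.linalg.norm(W, axis=1)))

def l2half_regularizer(W):
    # l2,1/2 regularizer (one common convention): sum of the square
    # roots of the row l2 norms, a stronger sparsity penalty than l2,1
    return float(np.sum(np.sqrt(np.linalg.norm(W, axis=1))))

def feature_ranking(W, k):
    # rank features by the l2 norm of their rows in the weight matrix
    # and return the indices of the top-k most discriminative features
    scores = np.linalg.norm(W, axis=1)
    return np.argsort(scores)[::-1][:k]

# toy weight matrix: 3 features projected onto 2 pseudo-label dimensions
W = np.array([[3.0, 4.0],
              [1.0, 0.0],
              [0.0, 2.0]])
print(l21_norm(W))            # 5 + 1 + 2 = 8.0
print(feature_ranking(W, 2))  # [0 2]
```

Because both penalties act on entire rows of the weight matrix, a feature whose row norm is driven to zero is effectively discarded, which is why row-wise norms double as feature-importance scores.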